rfgb.boosting module

Core methods for learning and inference: computing gradients, updating gradients, and performing inference on test examples.

Documentation

rfgb.boosting.computeAdviceGradient(example)[source]

Proves each clause (Prover.prove()) and computes the advice gradient as NumberTrue - NumberFalse.

Parameters:
  • example – Example for which the advice gradient is computed.
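
Example (a minimal sketch, not part of the rfgb documentation; the target literal below is hypothetical, and advice clauses are assumed to already be loaded along with the training data):

from rfgb.boosting import computeAdviceGradient

# Hypothetical ground target literal; in practice this would be an
# example drawn from the loaded training data.
example = 'cancer(alice)'
advice_gradient = computeAdviceGradient(example)
# advice_gradient is NumberTrue - NumberFalse over the advice clauses.
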
rfgb.boosting.computeSumOfGradients(example, trees, data)[source]

Computes the sum of gradients for an example across the learned trees.

Parameters:
  • example – Example for which the gradient sum is computed.
  • trees (list.) – List of strings representing learned trees.
  • data (utils.Data object.) – Training or testing data.
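
Example (a minimal sketch, not part of the rfgb documentation; 'training_data' and 'trees' are assumed to already exist, e.g. a utils.Data object and the list of tree strings produced during learning):

from rfgb.boosting import computeSumOfGradients

# 'cancer(alice)' is a hypothetical ground target literal.
sum_of_gradients = computeSumOfGradients('cancer(alice)', trees, training_data)
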
rfgb.boosting.inferTreeValue(clauses, query, data)[source]

Returns the probability of the query given the data and the learned clauses.

Parameters:
  • clauses – Clauses learned for a tree.
  • query – Query whose probability is returned.
  • data – Data used for inference.
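
Example (a minimal sketch, not part of the rfgb documentation; 'clauses', 'query', and 'test_data' are assumed to already exist in scope):

from rfgb.boosting import inferTreeValue

# 'clauses' holds the clauses learned for one tree, 'query' is a ground
# target literal, and 'test_data' is a utils.Data object.
probability = inferTreeValue(clauses, query, test_data)
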
rfgb.boosting.performInference(testData, trees)[source]

Computes the probabilities for test examples.

Parameters:
  • testData (utils.Data object.) – Data for testing.
  • trees (list.) – List of strings representing learned decision trees.

Example:

from rfgb.boosting import performInference
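# Hypothetical continuation (not part of the rfgb documentation):
# 'test_data' is assumed to be a utils.Data object holding the test
# examples and 'trees' the list of tree strings produced by learning.
performInference(test_data, trees)
# Probabilities are computed for each example in test_data.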
rfgb.boosting.updateGradients(data, trees, loss='LS', delta=None)[source]

Updates the gradients of the data.

Parameters:
  • data (utils.Data object.) – Training or testing data (with parameters).
  • trees (list.) – List of strings representing trees.
  • loss (str.) – Loss function for regression (currently implemented: ‘LS’, ‘LAD’, ‘Huber’).
  • delta (float) – Delta value for Huber loss.

Example:

from rfgb.boosting import updateGradients
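# Hypothetical continuation (not part of the rfgb documentation):
# 'training_data' is assumed to be a utils.Data object and 'trees' the
# list of tree strings learned so far.
updateGradients(training_data, trees, loss='LS')
# With Huber loss, a delta value may also be supplied (1.3 is arbitrary):
updateGradients(training_data, trees, loss='Huber', delta=1.3)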