Class: Rumale::Ensemble::GradientBoostingRegressor

Inherits:
Base::Estimator show all
Includes:
Base::Regressor
Defined in:
rumale-ensemble/lib/rumale/ensemble/gradient_boosting_regressor.rb

Overview

GradientBoostingRegressor is a class that implements gradient tree boosting for regression. The class uses L2 loss as the loss function.

Reference

  • Friedman, J H. “Greedy Function Approximation: A Gradient Boosting Machine,” Annals of Statistics, 29 (5), pp. 1189–1232, 2001.

  • Friedman, J H. “Stochastic Gradient Boosting,” Computational Statistics and Data Analysis, 38 (4), pp. 367–378, 2002.

  • Chen, T., and Guestrin, C., “XGBoost: A Scalable Tree Boosting System,” Proc. KDD’16, pp. 785–794, 2016.
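The forward-stagewise procedure from Friedman (2001) can be sketched in plain Ruby. This is a simplified illustration, not the library's implementation: decision stumps stand in for GradientTreeRegressor, the helper names (fit_stump, fit_boosting, predict_boosting) are hypothetical, and with L2 loss each boosting round simply fits the residuals y - f(x):

```ruby
# Toy sketch of gradient boosting with L2 loss on 1-D data.
# Each weak learner is a decision stump (threshold + two leaf values);
# with L2 loss the negative gradient is just the residual y - f(x).
def fit_stump(x, residuals)
  best = nil
  x.uniq.sort.each_cons(2) do |a, b|
    thr = (a + b) / 2.0
    left  = x.each_index.select { |i| x[i] <= thr }
    right = x.each_index.select { |i| x[i] >  thr }
    lv = left.sum  { |i| residuals[i] } / left.size
    rv = right.sum { |i| residuals[i] } / right.size
    err = x.each_index.sum { |i| (residuals[i] - (x[i] <= thr ? lv : rv))**2 }
    best = { thr: thr, left: lv, right: rv, err: err } if best.nil? || err < best[:err]
  end
  best
end

def fit_boosting(x, y, n_estimators: 50, learning_rate: 0.1)
  base = y.sum / y.size # start from the mean target, like @base_predictions
  f = Array.new(y.size, base)
  stumps = Array.new(n_estimators) do
    residuals = y.each_index.map { |i| y[i] - f[i] }
    stump = fit_stump(x, residuals)
    y.each_index { |i| f[i] += learning_rate * (x[i] <= stump[:thr] ? stump[:left] : stump[:right]) }
    stump
  end
  [base, stumps]
end

def predict_boosting(base, stumps, x, learning_rate: 0.1)
  x.map { |xi| base + learning_rate * stumps.sum { |s| xi <= s[:thr] ? s[:left] : s[:right] } }
end

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.0, 1.2, 0.9, 3.1, 3.0, 2.9]
base, stumps = fit_boosting(x, y)
preds = predict_boosting(base, stumps, x)
```

After 50 rounds the additive model fits this toy data closely; the learning_rate and n_estimators parameters play the same roles as in the constructor below.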

Examples:

require 'rumale/ensemble/gradient_boosting_regressor'

estimator =
  Rumale::Ensemble::GradientBoostingRegressor.new(
    n_estimators: 100, learning_rate: 0.3, reg_lambda: 0.001, random_seed: 1)
estimator.fit(training_samples, training_values)
results = estimator.predict(testing_samples)

Instance Attribute Summary collapse

Attributes inherited from Base::Estimator

#params

Instance Method Summary collapse

Methods included from Base::Regressor

#score

Constructor Details

#initialize(n_estimators: 100, learning_rate: 0.1, reg_lambda: 0.0, subsample: 1.0, max_depth: nil, max_leaf_nodes: nil, min_samples_leaf: 1, max_features: nil, n_jobs: nil, random_seed: nil) ⇒ GradientBoostingRegressor

Create a new regressor with gradient tree boosting.

Parameters:

  • n_estimators (Integer) (defaults to: 100)

    The number of trees for constructing the regressor.

  • learning_rate (Float) (defaults to: 0.1)

    The boosting learning rate.

  • reg_lambda (Float) (defaults to: 0.0)

    The L2 regularization term on weight.

  • subsample (Float) (defaults to: 1.0)

    The subsampling ratio of the training samples.

  • max_depth (Integer) (defaults to: nil)

    The maximum depth of the tree. If nil is given, decision tree grows without concern for depth.

  • max_leaf_nodes (Integer) (defaults to: nil)

    The maximum number of leaves on decision tree. If nil is given, number of leaves is not limited.

  • min_samples_leaf (Integer) (defaults to: 1)

    The minimum number of samples at a leaf node.

  • max_features (Integer) (defaults to: nil)

    The number of features to consider when searching optimal split point. If nil is given, split process considers all features.

  • n_jobs (Integer) (defaults to: nil)

    The number of jobs for running the fit and predict methods in parallel. If nil is given, the methods do not execute in parallel. If zero or less is given, it becomes equal to the number of processors. This parameter is ignored if the Parallel gem is not loaded.

  • random_seed (Integer) (defaults to: nil)

    The seed value used to initialize the random generator. It is used to randomly determine the order of features when deciding the splitting point.



# File 'rumale-ensemble/lib/rumale/ensemble/gradient_boosting_regressor.rb', line 63

def initialize(n_estimators: 100, learning_rate: 0.1, reg_lambda: 0.0, subsample: 1.0,
               max_depth: nil, max_leaf_nodes: nil, min_samples_leaf: 1,
               max_features: nil, n_jobs: nil, random_seed: nil)
  super()
  @params = {
    n_estimators: n_estimators,
    learning_rate: learning_rate,
    reg_lambda: reg_lambda,
    subsample: subsample,
    max_depth: max_depth,
    max_leaf_nodes: max_leaf_nodes,
    min_samples_leaf: min_samples_leaf,
    max_features: max_features,
    n_jobs: n_jobs,
    random_seed: random_seed || srand
  }
  @rng = Random.new(@params[:random_seed])
end

Instance Attribute Details

#estimators ⇒ Array<GradientTreeRegressor> (readonly)

Return the set of estimators.

Returns:

  • (Array<GradientTreeRegressor>)

    or (Array<Array<GradientTreeRegressor>>) when fitted to multi-target values



# File 'rumale-ensemble/lib/rumale/ensemble/gradient_boosting_regressor.rb', line 33

def estimators
  @estimators
end

#feature_importances ⇒ Numo::DFloat (readonly)

Return the importance for each feature. The feature importances are calculated based on the numbers of times the feature is used for splitting.

Returns:

  • (Numo::DFloat)

    (size: n_features)



# File 'rumale-ensemble/lib/rumale/ensemble/gradient_boosting_regressor.rb', line 38

def feature_importances
  @feature_importances
end
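The counting scheme described above can be illustrated with plain arrays. The split lists below are made up for the example; in the library each GradientTreeRegressor accumulates these counts internally:

```ruby
# Sketch: each tree contributes the feature indices it split on;
# a feature's importance is how often it was chosen across all trees.
splits_per_tree = [
  [0, 2, 0], # split feature indices in tree 1
  [1, 0],    # tree 2
  [2, 2, 1]  # tree 3
]
n_features = 3
importances = Array.new(n_features, 0)
splits_per_tree.each { |splits| splits.each { |f| importances[f] += 1 } }
importances # => [3, 2, 3]
```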

#rng ⇒ Random (readonly)

Return the random generator for random selection of feature index.

Returns:

  • (Random)


# File 'rumale-ensemble/lib/rumale/ensemble/gradient_boosting_regressor.rb', line 42

def rng
  @rng
end

Instance Method Details

#apply(x) ⇒ Numo::Int32

Return the index of the leaf that each sample reached.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The samples to predict the values.

Returns:

  • (Numo::Int32)

    (shape: [n_samples, n_estimators]) Leaf indices for each sample.



# File 'rumale-ensemble/lib/rumale/ensemble/gradient_boosting_regressor.rb', line 128

def apply(x)
  n_outputs = @estimators.first.is_a?(Array) ? @estimators.size : 1
  leaf_ids = if n_outputs > 1
               Array.new(n_outputs) { |n| @estimators[n].map { |tree| tree.apply(x) } }
             else
               @estimators.map { |tree| tree.apply(x) }
             end
  Numo::Int32[*leaf_ids].transpose.dup
end
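The reshaping in #apply can be seen with plain arrays standing in for Numo::Int32 (the leaf ids below are made up for illustration):

```ruby
# Each tree yields one leaf id per sample; stacking the per-tree rows and
# transposing gives the [n_samples, n_estimators] layout that #apply returns.
leaf_ids_per_tree = [
  [0, 2, 2], # tree 1: leaf reached by samples 1..3
  [1, 1, 3], # tree 2
  [4, 0, 4]  # tree 3
]
per_sample = leaf_ids_per_tree.transpose
per_sample # => [[0, 1, 4], [2, 1, 0], [2, 3, 4]]
```

Row i then lists, for sample i, the leaf it reached in each of the n_estimators trees.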

#fit(x, y) ⇒ GradientBoostingRegressor

Fit the model with given training data.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The training data to be used for fitting the model.

  • y (Numo::DFloat)

    (shape: [n_samples]) The target values to be used for fitting the model.

Returns:

  • (GradientBoostingRegressor)

    The learned regressor itself.

# File 'rumale-ensemble/lib/rumale/ensemble/gradient_boosting_regressor.rb', line 87

def fit(x, y)
  # initialize some variables.
  n_features = x.shape[1]
  @params[:max_features] = n_features if @params[:max_features].nil?
  @params[:max_features] = [[1, @params[:max_features]].max, n_features].min # rubocop:disable Style/ComparableClamp
  n_outputs = y.shape[1].nil? ? 1 : y.shape[1]
  # train regressor.
  @base_predictions = n_outputs > 1 ? y.mean(0) : y.mean
  @estimators = if n_outputs > 1
                  multivar_estimators(x, y)
                else
                  partial_fit(x, y, @base_predictions)
                end
  # calculate feature importances.
  @feature_importances = if n_outputs > 1
                           multivar_feature_importances
                         else
                           @estimators.sum(&:feature_importances)
                         end
  self
end
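The clamping of max_features at the top of #fit bounds the value to the range 1..n_features. A standalone sketch of that logic (clamp_max_features is a hypothetical helper name, not part of the library):

```ruby
# Reproduces the bounding done in #fit: nil means "use all features",
# and any explicit value is clamped into 1..n_features.
def clamp_max_features(max_features, n_features)
  max_features = n_features if max_features.nil?
  [[1, max_features].max, n_features].min
end

clamp_max_features(nil, 10) # => 10
clamp_max_features(0, 10)   # => 1
clamp_max_features(99, 10)  # => 10
```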

#predict(x) ⇒ Numo::DFloat

Predict values for samples.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The samples to predict the values.

Returns:

  • (Numo::DFloat)

    (shape: [n_samples]) Predicted values per sample.



# File 'rumale-ensemble/lib/rumale/ensemble/gradient_boosting_regressor.rb', line 113

def predict(x)
  n_outputs = @estimators.first.is_a?(Array) ? @estimators.size : 1
  if n_outputs > 1
    multivar_predict(x)
  elsif enable_parallel?
    parallel_map(@params[:n_estimators]) { |n| @estimators[n].predict(x) }.sum + @base_predictions
  else
    @estimators.sum { |tree| tree.predict(x) } + @base_predictions
  end
end
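The single-output branch of #predict sums every tree's output and adds the stored base prediction. With plain arrays and made-up tree outputs (assuming each tree's output already incorporates its learning-rate scaling):

```ruby
# Two trees, each predicting a contribution for the same two samples.
base_prediction = 2.0
tree_outputs = [
  [0.5, -0.25], # tree 1's output per sample
  [0.25, 0.5]   # tree 2's output per sample
]
preds = tree_outputs.transpose.map { |contribs| base_prediction + contribs.sum }
preds # => [2.75, 2.25]
```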