Class: Rumale::LinearModel::SGDRegressor
- Inherits: SGDEstimator
  - Object
  - Base::Estimator
  - BaseEstimator
  - SGDEstimator
  - Rumale::LinearModel::SGDRegressor
- Includes:
- Base::Regressor
- Defined in:
- rumale-linear_model/lib/rumale/linear_model/sgd_regressor.rb
Overview
SGDRegressor is a class that implements linear regressor with stochastic gradient descent optimization.
Reference
- Shalev-Shwartz, S., and Singer, Y., “Pegasos: Primal Estimated sub-GrAdient SOlver for SVM,” Proc. ICML’07, pp. 807–814, 2007.
- Tsuruoka, Y., Tsujii, J., and Ananiadou, S., “Stochastic Gradient Descent Training for L1-regularized Log-linear Models with Cumulative Penalty,” Proc. ACL’09, pp. 477–485, 2009.
- Bottou, L., “Large-Scale Machine Learning with Stochastic Gradient Descent,” Proc. COMPSTAT’10, pp. 177–186, 2010.
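To make the idea concrete, the following is an illustrative sketch (not Rumale's actual implementation, which uses Numo::DFloat arrays plus the momentum and decay schedule set in the constructor) of a single stochastic gradient step for squared-error loss with an L2 penalty, in plain Ruby. `sgd_step` is a hypothetical helper name introduced here for illustration only.

```ruby
# Illustrative only: one stochastic gradient step for squared-error loss
# with an L2 penalty, written in plain Ruby.
def sgd_step(weights, sample, target, learning_rate: 0.01, reg_param: 1.0)
  # prediction with the current weights
  pred = weights.zip(sample).sum { |w, x| w * x }
  error = pred - target
  # gradient of 0.5 * (pred - target)^2 plus the L2 regularization term
  weights.each_index.map do |i|
    grad = error * sample[i] + reg_param * weights[i]
    weights[i] - learning_rate * grad
  end
end

# Repeating the step on one sample drives the prediction toward the target.
w = [0.0, 0.0]
100.times { w = sgd_step(w, [1.0, 2.0], 5.0, learning_rate: 0.05, reg_param: 0.0) }
```

With `reg_param` set to zero the iteration converges to weights whose dot product with the sample equals the target; a nonzero penalty shrinks the weights toward zero, trading fit for regularization as in the Pegasos reference above.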
Instance Attribute Summary collapse
- #rng ⇒ Random (readonly)
  Return the random generator for performing random sampling.
Attributes inherited from BaseEstimator
Attributes inherited from Base::Estimator
Instance Method Summary collapse
- #fit(x, y) ⇒ Object
  Fit the model with given training data.
- #initialize(loss: 'squared_error', learning_rate: 0.01, decay: nil, momentum: 0.9, penalty: 'l2', reg_param: 1.0, l1_ratio: 0.5, fit_bias: true, bias_scale: 1.0, epsilon: 0.1, max_iter: 1000, batch_size: 50, tol: 1e-4, n_jobs: nil, verbose: false, random_seed: nil) ⇒ SGDRegressor (constructor)
  Create a new linear regressor with stochastic gradient descent optimization.
- #partial_fit(x, y) ⇒ SGDRegressor
  Perform 1-epoch of stochastic gradient descent optimization with given training data.
- #predict(x) ⇒ Numo::DFloat
  Predict values for samples.
Methods included from Base::Regressor
Constructor Details
#initialize(loss: 'squared_error', learning_rate: 0.01, decay: nil, momentum: 0.9, penalty: 'l2', reg_param: 1.0, l1_ratio: 0.5, fit_bias: true, bias_scale: 1.0, epsilon: 0.1, max_iter: 1000, batch_size: 50, tol: 1e-4, n_jobs: nil, verbose: false, random_seed: nil) ⇒ SGDRegressor
Create a new linear regressor with stochastic gradient descent optimization.
# File 'rumale-linear_model/lib/rumale/linear_model/sgd_regressor.rb', line 59

def initialize(loss: 'squared_error', learning_rate: 0.01, decay: nil, momentum: 0.9,
               penalty: 'l2', reg_param: 1.0, l1_ratio: 0.5,
               fit_bias: true, bias_scale: 1.0, epsilon: 0.1,
               max_iter: 1000, batch_size: 50, tol: 1e-4,
               n_jobs: nil, verbose: false, random_seed: nil)
  super()
  @params.merge!(
    loss: loss,
    learning_rate: learning_rate,
    decay: decay,
    momentum: momentum,
    penalty: penalty,
    reg_param: reg_param,
    l1_ratio: l1_ratio,
    fit_bias: fit_bias,
    bias_scale: bias_scale,
    epsilon: epsilon,
    max_iter: max_iter,
    batch_size: batch_size,
    tol: tol,
    n_jobs: n_jobs,
    verbose: verbose,
    random_seed: random_seed
  )
  @params[:decay] ||= @params[:reg_param] * @params[:learning_rate]
  @params[:random_seed] ||= srand
  @rng = Random.new(@params[:random_seed])
  @penalty_type = @params[:penalty]
  @loss_func = case @params[:loss]
               when Rumale::LinearModel::Loss::MeanSquaredError::NAME
                 Rumale::LinearModel::Loss::MeanSquaredError.new
               when Rumale::LinearModel::Loss::EpsilonInsensitive::NAME
                 Rumale::LinearModel::Loss::EpsilonInsensitive.new(epsilon: @params[:epsilon])
               else
                 raise ArgumentError, "given loss '#{loss}' is not supported."
               end
end
Instance Attribute Details
#rng ⇒ Random (readonly)
Return the random generator for performing random sampling.
# File 'rumale-linear_model/lib/rumale/linear_model/sgd_regressor.rb', line 29

def rng
  @rng
end
Instance Method Details
#fit(x, y) ⇒ Object
Fit the model with given training data.
Returns: (SGDRegressor) The learned regressor itself.
# File 'rumale-linear_model/lib/rumale/linear_model/sgd_regressor.rb', line 103

def fit(x, y)
  x = Rumale::Validation.check_convert_sample_array(x)
  y = Rumale::Validation.check_convert_target_value_array(y)
  Rumale::Validation.check_sample_size(x, y)

  n_outputs = y.shape[1].nil? ? 1 : y.shape[1]
  n_features = x.shape[1]

  if n_outputs > 1
    @weight_vec = Numo::DFloat.zeros(n_outputs, n_features)
    @bias_term = Numo::DFloat.zeros(n_outputs)
    if enable_parallel?
      models = parallel_map(n_outputs) { |n| partial_fit_(x, y[true, n]) }
      n_outputs.times { |n| @weight_vec[n, true], @bias_term[n] = models[n] }
    else
      n_outputs.times { |n| @weight_vec[n, true], @bias_term[n] = partial_fit_(x, y[true, n]) }
    end
  else
    @weight_vec, @bias_term = partial_fit_(x, y)
  end

  self
end
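The multi-output branch above can be summarized as: each column of y is fitted independently and the resulting (weights, bias) pairs are stacked. Here is a minimal plain-Ruby sketch of that strategy; `single_fit` is a hypothetical stand-in for the private partial_fit_ method and simply returns zero weights with the column mean as bias.

```ruby
# Hypothetical stand-in for the per-output fitting routine: returns a zero
# weight vector and the column mean as the bias term.
def single_fit(x, y_column)
  n_features = x.first.size
  [Array.new(n_features, 0.0), y_column.sum / y_column.size.to_f]
end

# Sketch of the multi-output strategy: fit each output column on its own
# and collect one (weights, bias) pair per output.
def fit_multi_output(x, y_columns)
  y_columns.map { |col| single_fit(x, col) }
end

models = fit_multi_output([[1.0, 2.0], [3.0, 4.0]], [[3.0, 5.0], [7.0, 9.0]])
# models holds two (weights, bias) pairs, one per output column
```

In Rumale the same loop runs over `parallel_map` when `n_jobs` enables parallelism, since the per-output fits are independent of one another.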
#partial_fit(x, y) ⇒ SGDRegressor
Perform 1-epoch of stochastic gradient descent optimization with given training data.
# File 'rumale-linear_model/lib/rumale/linear_model/sgd_regressor.rb', line 132

def partial_fit(x, y)
  x = Rumale::Validation.check_convert_sample_array(x)
  y = Rumale::Validation.check_convert_target_value_array(y)
  Rumale::Validation.check_sample_size(x, y)

  n_features = x.shape[1]
  n_features += 1 if fit_bias?
  need_init = @weight.nil? || @weight.shape[0] != n_features

  @weight_vec, @bias_term = partial_fit_(x, y, max_iter: 1, init: need_init)

  self
end
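For intuition, "1-epoch" here means one shuffled pass over the training samples taken in mini-batches of `batch_size`. The sketch below (plain Ruby, not Rumale's implementation; `one_epoch` is a name introduced for illustration) shows how the indices are partitioned; in the real optimizer each batch triggers one gradient update.

```ruby
# Illustrative only: partition shuffled sample indices into mini-batches,
# the way a one-epoch pass over the data is organized.
def one_epoch(n_samples, batch_size, rng)
  indices = (0...n_samples).to_a.shuffle(random: rng)
  # each slice would receive one gradient update in the actual optimizer
  indices.each_slice(batch_size).to_a
end

batches = one_epoch(10, 4, Random.new(42))
# 10 samples with batch_size 4 yield batches of sizes 4, 4, and 2
```

Because `partial_fit` runs exactly one such epoch (`max_iter: 1`), it suits out-of-core or online training where data arrives in chunks.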
#predict(x) ⇒ Numo::DFloat
Predict values for samples.
# File 'rumale-linear_model/lib/rumale/linear_model/sgd_regressor.rb', line 150

def predict(x)
  x = Rumale::Validation.check_convert_sample_array(x)

  x.dot(@weight_vec.transpose) + @bias_term
end
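The prediction is just the affine map `x.dot(@weight_vec.transpose) + @bias_term`. A plain-Ruby equivalent for the single-output case (Rumale performs this with Numo::DFloat; `predict_linear` is a name introduced here for illustration) looks like this:

```ruby
# Plain-Ruby equivalent of predict for the single-output case:
# dot each sample row with the weight vector and add the bias term.
def predict_linear(samples, weights, bias)
  samples.map { |row| row.zip(weights).sum { |x, w| x * w } + bias }
end

predict_linear([[1.0, 2.0], [3.0, 4.0]], [0.5, 0.25], 1.0)
# => [2.0, 3.5]
```

In the multi-output case `@weight_vec` is a matrix with one row per output, so the same expression yields an (n_samples, n_outputs) array.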