Class: Rumale::LinearModel::SGDClassifier

Inherits:
SGDEstimator
Includes:
Base::Classifier
Defined in:
rumale-linear_model/lib/rumale/linear_model/sgd_classifier.rb

Overview

SGDClassifier is a class that implements a linear classifier with stochastic gradient descent optimization.

Reference

  • Shalev-Shwartz, S., and Singer, Y., “Pegasos: Primal Estimated sub-GrAdient SOlver for SVM,” Proc. ICML’07, pp. 807–814, 2007.

  • Tsuruoka, Y., Tsujii, J., and Ananiadou, S., “Stochastic Gradient Descent Training for L1-regularized Log-linear Models with Cumulative Penalty,” Proc. ACL’09, pp. 477–485, 2009.

  • Bottou, L., “Large-Scale Machine Learning with Stochastic Gradient Descent,” Proc. COMPSTAT’10, pp. 177–186, 2010.

Examples:

require 'rumale/linear_model/sgd_classifier'

estimator =
  Rumale::LinearModel::SGDClassifier.new(loss: 'hinge', reg_param: 1.0, max_iter: 1000, batch_size: 50, random_seed: 1)
estimator.fit(training_samples, training_labels)
results = estimator.predict(testing_samples)

Instance Attribute Summary

Attributes inherited from BaseEstimator

#bias_term, #weight_vec

Attributes inherited from Base::Estimator

#params

Instance Method Summary

Methods included from Base::Classifier

#score

Constructor Details

#initialize(loss: 'hinge', learning_rate: 0.01, decay: nil, momentum: 0.9, penalty: 'l2', reg_param: 1.0, l1_ratio: 0.5, fit_bias: true, bias_scale: 1.0, max_iter: 1000, batch_size: 50, tol: 1e-4, n_jobs: nil, verbose: false, random_seed: nil) ⇒ SGDClassifier

Create a new linear classifier with stochastic gradient descent optimization.

Parameters:

  • loss (String) (defaults to: 'hinge')

    The loss function to be used (‘hinge’ or ‘log_loss’).

  • learning_rate (Float) (defaults to: 0.01)

    The initial value of learning rate. The learning rate decreases as the iteration proceeds according to the equation: learning_rate / (1 + decay * t).

  • decay (Float) (defaults to: nil)

    The smoothing parameter for decreasing the learning rate as the iteration proceeds. If nil is given, decay is set to ‘reg_param * learning_rate’.

  • momentum (Float) (defaults to: 0.9)

    The momentum factor.

  • penalty (String) (defaults to: 'l2')

    The regularization type to be used (‘l1’, ‘l2’, and ‘elasticnet’).

  • l1_ratio (Float) (defaults to: 0.5)

    The elastic-net type regularization mixing parameter. If penalty set to ‘l2’ or ‘l1’, this parameter is ignored. If l1_ratio = 1, the regularization is similar to Lasso. If l1_ratio = 0, the regularization is similar to Ridge. If 0 < l1_ratio < 1, the regularization is a combination of L1 and L2.

  • reg_param (Float) (defaults to: 1.0)

    The regularization parameter.

  • fit_bias (Boolean) (defaults to: true)

    The flag indicating whether to fit the bias term.

  • bias_scale (Float) (defaults to: 1.0)

    The scale of the bias term.

  • max_iter (Integer) (defaults to: 1000)

    The maximum number of epochs; one epoch passes the whole training data through the optimization process once.

  • batch_size (Integer) (defaults to: 50)

    The size of the mini batches.

  • tol (Float) (defaults to: 1e-4)

    The tolerance of loss for terminating optimization.

  • n_jobs (Integer) (defaults to: nil)

    The number of jobs for running the fit and predict methods in parallel. If nil is given, the methods do not execute in parallel. If zero or less is given, it becomes equal to the number of processors. This parameter is ignored if the Parallel gem is not loaded.

  • verbose (Boolean) (defaults to: false)

    The flag indicating whether to output loss during iteration.

  • random_seed (Integer) (defaults to: nil)

    The seed value used to initialize the random generator.
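The default decay and the learning-rate schedule described above can be sketched in plain Ruby (the values below are illustrative, not Rumale internals):

```ruby
# When decay is nil, it defaults to reg_param * learning_rate, and the
# effective learning rate at iteration t is learning_rate / (1 + decay * t).
learning_rate = 0.01
reg_param = 1.0
decay = nil
decay ||= reg_param * learning_rate # default: 1.0 * 0.01 = 0.01
rates = (0..3).map { |t| learning_rate / (1 + decay * t) }
# The rate starts at learning_rate and decreases monotonically.
```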



# File 'rumale-linear_model/lib/rumale/linear_model/sgd_classifier.rb', line 63

def initialize(loss: 'hinge', learning_rate: 0.01, decay: nil, momentum: 0.9,
               penalty: 'l2', reg_param: 1.0, l1_ratio: 0.5,
               fit_bias: true, bias_scale: 1.0,
               max_iter: 1000, batch_size: 50, tol: 1e-4,
               n_jobs: nil, verbose: false, random_seed: nil)
  super()
  @params.merge!(
    loss: loss,
    learning_rate: learning_rate,
    decay: decay,
    momentum: momentum,
    penalty: penalty,
    reg_param: reg_param,
    l1_ratio: l1_ratio,
    fit_bias: fit_bias,
    bias_scale: bias_scale,
    max_iter: max_iter,
    batch_size: batch_size,
    tol: tol,
    n_jobs: n_jobs,
    verbose: verbose,
    random_seed: random_seed
  )
  @params[:decay] ||= @params[:reg_param] * @params[:learning_rate]
  @params[:random_seed] ||= srand
  @rng = Random.new(@params[:random_seed])
  @penalty_type = @params[:penalty]
  @loss_func = case @params[:loss]
               when Rumale::LinearModel::Loss::HingeLoss::NAME
                 Rumale::LinearModel::Loss::HingeLoss.new
               when Rumale::LinearModel::Loss::LogLoss::NAME
                 Rumale::LinearModel::Loss::LogLoss.new
               else
                 raise ArgumentError, "given loss '#{loss}' is not supported."
               end
end

Instance Attribute Details

#classes ⇒ Numo::Int32 (readonly)

Return the class labels.

Returns:

  • (Numo::Int32)

    (shape: [n_classes])



# File 'rumale-linear_model/lib/rumale/linear_model/sgd_classifier.rb', line 30

def classes
  @classes
end

#rng ⇒ Random (readonly)

Return the random generator for performing random sampling.

Returns:

  • (Random)


# File 'rumale-linear_model/lib/rumale/linear_model/sgd_classifier.rb', line 34

def rng
  @rng
end

Instance Method Details

#decision_function(x) ⇒ Numo::DFloat

Calculate confidence scores for samples.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The samples to compute the scores.

Returns:

  • (Numo::DFloat)

    (shape: [n_samples, n_classes]) Confidence score per sample.



# File 'rumale-linear_model/lib/rumale/linear_model/sgd_classifier.rb', line 147

def decision_function(x)
  x = ::Rumale::Validation.check_convert_sample_array(x)

  x.dot(@weight_vec.transpose) + @bias_term
end
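For a binary classifier trained with the hinge loss, the usual sign-based decision rule maps a confidence score to a class label. A minimal plain-Ruby sketch (hypothetical scores, not Rumale's internal code):

```ruby
# Non-negative confidence scores map to the positive class (classes[1]),
# negative scores to the negative class (classes[0]).
classes = [0, 1]
scores = [-1.2, 0.4, 2.3]
predicted = scores.map { |s| s >= 0 ? classes[1] : classes[0] }
# predicted => [0, 1, 1]
```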

#fit(x, y) ⇒ SGDClassifier

Fit the model with given training data.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The training data to be used for fitting the model.

  • y (Numo::Int32)

    (shape: [n_samples]) The labels to be used for fitting the model.

Returns:



# File 'rumale-linear_model/lib/rumale/linear_model/sgd_classifier.rb', line 105

def fit(x, y)
  x = Rumale::Validation.check_convert_sample_array(x)
  y = Rumale::Validation.check_convert_label_array(y)
  Rumale::Validation.check_sample_size(x, y)

  @classes = Numo::Int32[*y.to_a.uniq.sort]

  send(:"fit_#{@loss_func.name}", x, y)

  self
end
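The class-label bookkeeping at the start of #fit (the sorted unique labels become the classes attribute) can be sketched in plain Ruby with hypothetical labels:

```ruby
# Collect the distinct labels and sort them; Rumale stores the result
# as a Numo::Int32 array, but the logic is the same.
y = [2, 0, 1, 2, 0]
classes = y.uniq.sort
# classes => [0, 1, 2]
```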

#partial_fit(x, y) ⇒ SGDClassifier

Perform one epoch of stochastic gradient descent optimization with given training data.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The training data to be used for fitting the model.

  • y (Numo::Int32)

    (shape: [n_samples]) The binary labels to be used for fitting the model.

Returns:



# File 'rumale-linear_model/lib/rumale/linear_model/sgd_classifier.rb', line 122

def partial_fit(x, y)
  x = Rumale::Validation.check_convert_sample_array(x)
  y = Rumale::Validation.check_convert_label_array(y)
  Rumale::Validation.check_sample_size(x, y)

  n_features = x.shape[1]
  n_features += 1 if fit_bias?
  need_init = @weight.nil? || @weight.shape[0] != n_features

  @classes = Numo::Int32[*y.to_a.uniq.sort] if need_init
  negative_label = @classes[0]
  bin_y = Numo::Int32.cast(y.ne(negative_label)) * 2 - 1

  @weight_vec, @bias_term = partial_fit_(x, bin_y, max_iter: 1, init: need_init)
  if @loss_func.name == Rumale::LinearModel::Loss::HingeLoss::NAME
    @prob_param = Rumale::ProbabilisticOutput.fit_sigmoid(x.dot(@weight_vec.transpose) + @bias_term, bin_y)
  end

  self
end
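The binary encoding that #partial_fit applies before optimization (the smallest class label becomes the negative class, encoded as -1, and every other label becomes +1) can be sketched in plain Ruby without Numo, using hypothetical labels:

```ruby
# Mirrors bin_y = Numo::Int32.cast(y.ne(negative_label)) * 2 - 1
# from the source above, in plain Ruby.
y = [3, 7, 3, 7, 7]
negative_label = y.uniq.min
bin_y = y.map { |label| label == negative_label ? -1 : 1 }
# bin_y => [-1, 1, -1, 1, 1]
```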

#predict(x) ⇒ Numo::Int32

Predict class labels for samples.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The samples to predict the labels.

Returns:

  • (Numo::Int32)

    (shape: [n_samples]) Predicted class label per sample.



# File 'rumale-linear_model/lib/rumale/linear_model/sgd_classifier.rb', line 157

def predict(x)
  x = ::Rumale::Validation.check_convert_sample_array(x)

  send(:"predict_#{@loss_func.name}", x)
end

#predict_proba(x) ⇒ Numo::DFloat

Predict probability for samples.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The samples to predict the probabilities.

Returns:

  • (Numo::DFloat)

    (shape: [n_samples, n_classes]) Predicted probability of each class per sample.



# File 'rumale-linear_model/lib/rumale/linear_model/sgd_classifier.rb', line 167

def predict_proba(x)
  x = ::Rumale::Validation.check_convert_sample_array(x)

  send(:"predict_proba_#{@loss_func.name}", x)
end
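For the hinge loss, probabilities come from a sigmoid fitted to the confidence scores (Platt scaling, see the #partial_fit source above). A minimal sketch of the mapping with a hypothetical score and the fitted sigmoid parameters omitted:

```ruby
# Squash a raw confidence score into [0, 1] with the logistic function;
# the complementary class gets the remaining probability mass.
score = 0.8
prob_positive = 1.0 / (1.0 + Math.exp(-score))
prob_negative = 1.0 - prob_positive
```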