Class: Rumale::LinearModel::ElasticNet

Inherits:
BaseEstimator
Includes:
Base::Regressor
Defined in:
rumale-linear_model/lib/rumale/linear_model/elastic_net.rb

Overview

ElasticNet is a class that implements Elastic-net Regression with coordinate descent optimization.

Reference

  • Friedman, J., Hastie, T., and Tibshirani, R., “Regularization Paths for Generalized Linear Models via Coordinate Descent,” Journal of Statistical Software, 33 (1), pp. 1–22, 2010.

  • Simon, N., Friedman, J., and Hastie, T., “A Blockwise Descent Algorithm for Group-penalized Multiresponse and Multinomial Regression,” arXiv preprint arXiv:1311.6529, 2013.

Examples:

require 'rumale/linear_model/elastic_net'

estimator = Rumale::LinearModel::ElasticNet.new(reg_param: 0.1, l1_ratio: 0.5)
estimator.fit(training_samples, training_values)
results = estimator.predict(testing_samples)

Instance Attribute Summary

Attributes inherited from BaseEstimator

#bias_term, #weight_vec

Attributes inherited from Base::Estimator

#params

Instance Method Summary

Methods included from Base::Regressor

#score

Constructor Details

#initialize(reg_param: 1.0, l1_ratio: 0.5, fit_bias: true, bias_scale: 1.0, max_iter: 1000, tol: 1e-4) ⇒ ElasticNet

Create a new Elastic-net regressor.

Parameters:

  • reg_param (Float) (defaults to: 1.0)

    The regularization parameter.

  • l1_ratio (Float) (defaults to: 0.5)

    The elastic-net mixing parameter. If l1_ratio = 1, the penalty reduces to the L1 (Lasso) penalty. If l1_ratio = 0, it reduces to the L2 (Ridge) penalty. If 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.

  • fit_bias (Boolean) (defaults to: true)

    The flag indicating whether to fit the bias term.

  • bias_scale (Float) (defaults to: 1.0)

    The scale of the bias term.

  • max_iter (Integer) (defaults to: 1000)

    The maximum number of epochs, i.e., the maximum number of passes over the whole training data during optimization.

  • tol (Float) (defaults to: 1e-4)

    The tolerance of loss for terminating optimization.



# File 'rumale-linear_model/lib/rumale/linear_model/elastic_net.rb', line 42

def initialize(reg_param: 1.0, l1_ratio: 0.5, fit_bias: true, bias_scale: 1.0, max_iter: 1000, tol: 1e-4)
  super()
  @params = {
    reg_param: reg_param,
    l1_ratio: l1_ratio,
    fit_bias: fit_bias,
    bias_scale: bias_scale,
    max_iter: max_iter,
    tol: tol
  }
end
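
As a rough sketch (the toy data below is invented for illustration and is not part of the library), the l1_ratio parameter slides the penalty between the Lasso and Ridge extremes:

require 'numo/narray'
require 'rumale/linear_model/elastic_net'

# Hypothetical toy data for illustration only.
x = Numo::DFloat[[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]]
y = Numo::DFloat[3.0, 3.0, 7.0, 7.0]

# l1_ratio = 1.0 gives a pure L1 penalty (Lasso-like behavior).
lasso_like = Rumale::LinearModel::ElasticNet.new(reg_param: 0.1, l1_ratio: 1.0)
lasso_like.fit(x, y)

# l1_ratio = 0.0 gives a pure L2 penalty (Ridge-like behavior).
ridge_like = Rumale::LinearModel::ElasticNet.new(reg_param: 0.1, l1_ratio: 0.0)
ridge_like.fit(x, y)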

Instance Attribute Details

#n_iter ⇒ Integer (readonly)

Return the number of iterations performed in coordinate descent optimization.

Returns:

  • (Integer)


# File 'rumale-linear_model/lib/rumale/linear_model/elastic_net.rb', line 28

def n_iter
  @n_iter
end
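
For example (a hedged sketch with hypothetical data), n_iter can be compared against the max_iter parameter to check whether coordinate descent stopped because the tolerance was reached rather than because the iteration budget ran out:

require 'numo/narray'
require 'rumale/linear_model/elastic_net'

x = Numo::DFloat[[1.0, 2.0], [2.0, 1.0], [3.0, 4.0]]  # hypothetical samples
y = Numo::DFloat[3.0, 3.0, 7.0]                       # hypothetical targets

estimator = Rumale::LinearModel::ElasticNet.new(max_iter: 1000, tol: 1e-4)
estimator.fit(x, y)

if estimator.n_iter < estimator.params[:max_iter]
  puts "converged after #{estimator.n_iter} iterations"
else
  puts "stopped at the max_iter limit"
end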

Instance Method Details

#fit(x, y) ⇒ ElasticNet

Fit the model with given training data.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The training data to be used for fitting the model.

  • y (Numo::DFloat)

    (shape: [n_samples, n_outputs]) The target values to be used for fitting the model.

Returns:

  • (ElasticNet)

    The learned regressor itself.

# File 'rumale-linear_model/lib/rumale/linear_model/elastic_net.rb', line 59

def fit(x, y)
  x = Rumale::Validation.check_convert_sample_array(x)
  y = Rumale::Validation.check_convert_target_value_array(y)
  Rumale::Validation.check_sample_size(x, y)

  @n_iter = 0
  x = expand_feature(x) if fit_bias?

  @weight_vec, @bias_term = if single_target?(y)
                              partial_fit(x, y)
                            else
                              partial_fit_multi(x, y)
                            end

  self
end
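
A brief usage sketch (sample values are hypothetical): fit accepts either a one-dimensional target vector or a two-dimensional multi-output target matrix, and in both cases returns the fitted estimator itself, so calls can be chained.

require 'numo/narray'
require 'rumale/linear_model/elastic_net'

x = Numo::DFloat[[1.0, 2.0], [2.0, 3.0], [3.0, 5.0]]  # hypothetical samples

# Single-target regression: y has shape [n_samples].
single = Rumale::LinearModel::ElasticNet.new(reg_param: 0.1, l1_ratio: 0.5)
single.fit(x, Numo::DFloat[1.0, 2.0, 3.0])

# Multi-output regression: y has shape [n_samples, n_outputs].
multi = Rumale::LinearModel::ElasticNet.new(reg_param: 0.1, l1_ratio: 0.5)
multi.fit(x, Numo::DFloat[[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])

# fit returns the estimator itself, so fitting and predicting can be chained.
results = single.fit(x, Numo::DFloat[1.0, 2.0, 3.0]).predict(x)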

#predict(x) ⇒ Numo::DFloat

Predict values for samples.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The samples for which to predict values.

Returns:

  • (Numo::DFloat)

    (shape: [n_samples, n_outputs]) Predicted values per sample.



# File 'rumale-linear_model/lib/rumale/linear_model/elastic_net.rb', line 80

def predict(x)
  x = Rumale::Validation.check_convert_sample_array(x)

  x.dot(@weight_vec.transpose) + @bias_term
end
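
A short usage sketch (values hypothetical): after fitting, predict returns one value per sample for a single-target model, or an [n_samples, n_outputs] matrix for a multi-output model; the inherited #score method from Base::Regressor then reports the coefficient of determination (R^2) on held-out data.

require 'numo/narray'
require 'rumale/linear_model/elastic_net'

estimator = Rumale::LinearModel::ElasticNet.new(reg_param: 0.1, l1_ratio: 0.5)
estimator.fit(Numo::DFloat[[1.0], [2.0], [3.0]], Numo::DFloat[2.1, 3.9, 6.2])  # hypothetical data

# Predicted values for new samples (one value per sample here).
predicted = estimator.predict(Numo::DFloat[[4.0], [5.0]])

# R^2 on evaluation data, via the inherited Base::Regressor#score.
r2 = estimator.score(Numo::DFloat[[4.0], [5.0]], Numo::DFloat[8.0, 10.0])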