Class: Rumale::NeuralNetwork::MLPRegressor

Inherits:
BaseMLP
Includes:
Base::Regressor
Defined in:
rumale-neural_network/lib/rumale/neural_network/mlp_regressor.rb

Overview

MLPRegressor is a class that implements a regressor based on a multi-layer perceptron. MLPRegressor uses ReLU as the activation function, Adam as the optimization method, and mean squared error as the loss function.

Examples:

require 'rumale/neural_network/mlp_regressor'

estimator = Rumale::NeuralNetwork::MLPRegressor.new(hidden_units: [100, 100], dropout_rate: 0.3)
estimator.fit(training_samples, training_values)
results = estimator.predict(testing_samples)

Instance Attribute Summary

Attributes inherited from Base::Estimator

#params

Instance Method Summary

Methods included from Base::Regressor

#score

Constructor Details

#initialize(hidden_units: [128, 128], dropout_rate: 0.4, learning_rate: 0.001, decay1: 0.9, decay2: 0.999, max_iter: 200, batch_size: 50, tol: 1e-4, verbose: false, random_seed: nil) ⇒ MLPRegressor

Create a new regressor with multi-layer perceptron.

Parameters:

  • hidden_units (Array) (defaults to: [128, 128])

    The number of units in the i-th hidden layer.

  • dropout_rate (Float) (defaults to: 0.4)

    The rate of the units to drop.

  • learning_rate (Float) (defaults to: 0.001)

    The initial value of learning rate in Adam optimizer.

  • decay1 (Float) (defaults to: 0.9)

    The smoothing parameter for the first moment in Adam optimizer.

  • decay2 (Float) (defaults to: 0.999)

    The smoothing parameter for the second moment in Adam optimizer.

  • max_iter (Integer) (defaults to: 200)

    The maximum number of epochs that indicates how many times the whole data is given to the training process.

  • batch_size (Integer) (defaults to: 50)

    The size of the mini batches.

  • tol (Float) (defaults to: 1e-4)

    The tolerance of loss for terminating optimization.

  • verbose (Boolean) (defaults to: false)

    The flag indicating whether to output loss during iteration.

  • random_seed (Integer) (defaults to: nil)

    The seed value used to initialize the random generator.



# File 'rumale-neural_network/lib/rumale/neural_network/mlp_regressor.rb', line 47

def initialize(hidden_units: [128, 128], dropout_rate: 0.4, learning_rate: 0.001, decay1: 0.9, decay2: 0.999,
               max_iter: 200, batch_size: 50, tol: 1e-4, verbose: false, random_seed: nil)
  super
end

Instance Attribute Details

#n_iter ⇒ Integer (readonly)

Return the number of iterations run for optimization.

Returns:

  • (Integer)


# File 'rumale-neural_network/lib/rumale/neural_network/mlp_regressor.rb', line 28

def n_iter
  @n_iter
end

#network ⇒ Rumale::NeuralNetwork::Model::Sequential (readonly)

Return the network.

Returns:

  • (Rumale::NeuralNetwork::Model::Sequential)


# File 'rumale-neural_network/lib/rumale/neural_network/mlp_regressor.rb', line 24

def network
  @network
end

#rng ⇒ Random (readonly)

Return the random generator.

Returns:

  • (Random)


# File 'rumale-neural_network/lib/rumale/neural_network/mlp_regressor.rb', line 32

def rng
  @rng
end

Instance Method Details

#fit(x, y) ⇒ MLPRegressor

Fit the model with given training data.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The training data to be used for fitting the model.

  • y (Numo::DFloat)

    (shape: [n_samples, n_outputs]) The target values to be used for fitting the model.

Returns:

  • (MLPRegressor)

    The learned regressor itself.

# File 'rumale-neural_network/lib/rumale/neural_network/mlp_regressor.rb', line 57

def fit(x, y)
  x = ::Rumale::Validation.check_convert_sample_array(x)
  y = ::Rumale::Validation.check_convert_target_value_array(y)
  ::Rumale::Validation.check_sample_size(x, y)

  y = y.expand_dims(1) if y.ndim == 1
  n_targets = y.shape[1]
  n_features = x.shape[1]
  sub_rng = @rng.dup

  loss = ::Rumale::NeuralNetwork::Loss::MeanSquaredError.new
  @network = buld_network(n_features, n_targets, sub_rng)
  @network = train(x, y, @network, loss, sub_rng)
  @network.delete_dropout

  self
end

#predict(x) ⇒ Numo::DFloat

Predict values for samples.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The samples to predict the values.

Returns:

  • (Numo::DFloat)

    (shape: [n_samples, n_outputs]) Predicted values per sample.



# File 'rumale-neural_network/lib/rumale/neural_network/mlp_regressor.rb', line 79

def predict(x)
  x = ::Rumale::Validation.check_convert_sample_array(x)

  out, = @network.forward(x)
  out = out[true, 0] if out.shape[1] == 1
  out
end