Class: Rumale::NeuralNetwork::MLPClassifier

Inherits:
BaseMLP
Includes:
Base::Classifier
Defined in:
rumale-neural_network/lib/rumale/neural_network/mlp_classifier.rb

Overview

MLPClassifier is a class that implements a classifier based on a multi-layer perceptron. It uses ReLU as the activation function, Adam as the optimization method, and softmax cross-entropy as the loss function.

Examples:

require 'rumale/neural_network/mlp_classifier'

estimator = Rumale::NeuralNetwork::MLPClassifier.new(hidden_units: [100, 100], dropout_rate: 0.3)
estimator.fit(training_samples, training_labels)
results = estimator.predict(testing_samples)

Instance Attribute Summary

Attributes inherited from Base::Estimator

#params

Instance Method Summary

Methods included from Base::Classifier

#score

Constructor Details

#initialize(hidden_units: [128, 128], dropout_rate: 0.4, learning_rate: 0.001, decay1: 0.9, decay2: 0.999, max_iter: 200, batch_size: 50, tol: 1e-4, verbose: false, random_seed: nil) ⇒ MLPClassifier

Create a new classifier with a multi-layer perceptron.

Parameters:

  • hidden_units (Array) (defaults to: [128, 128])

    The i-th element gives the number of units in the i-th hidden layer.

  • dropout_rate (Float) (defaults to: 0.4)

    The rate of the units to drop.

  • learning_rate (Float) (defaults to: 0.001)

    The initial learning rate for the Adam optimizer.

  • decay1 (Float) (defaults to: 0.9)

    The smoothing parameter for the first moment in the Adam optimizer.

  • decay2 (Float) (defaults to: 0.999)

    The smoothing parameter for the second moment in the Adam optimizer.

  • max_iter (Integer) (defaults to: 200)

    The maximum number of epochs, that is, how many times the whole training data is given to the training process.

  • batch_size (Integer) (defaults to: 50)

    The size of the mini-batches.

  • tol (Float) (defaults to: 1e-4)

    The tolerance of loss for terminating optimization.

  • verbose (Boolean) (defaults to: false)

    The flag indicating whether to output loss during iteration.

  • random_seed (Integer) (defaults to: nil)

    The seed value used to initialize the random generator.
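
For reference, learning_rate, decay1, and decay2 correspond to the hyperparameters \alpha, \beta_1, and \beta_2 of the standard Adam update rule (a textbook sketch for orientation, not a transcription of Rumale's internals):

    m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t
    v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2
    \theta_t = \theta_{t-1} - \alpha \, \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)

where \hat{m}_t = m_t / (1 - \beta_1^t) and \hat{v}_t = v_t / (1 - \beta_2^t) are the bias-corrected moment estimates of the gradient g_t.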



# File 'rumale-neural_network/lib/rumale/neural_network/mlp_classifier.rb', line 52

def initialize(hidden_units: [128, 128], dropout_rate: 0.4, learning_rate: 0.001, decay1: 0.9, decay2: 0.999,
               max_iter: 200, batch_size: 50, tol: 1e-4, verbose: false, random_seed: nil)
  super
end

Instance Attribute Details

#classes ⇒ Numo::Int32 (readonly)

Return the class labels.

Returns:

  • (Numo::Int32)

    (size: n_classes)



# File 'rumale-neural_network/lib/rumale/neural_network/mlp_classifier.rb', line 29

def classes
  @classes
end

#n_iter ⇒ Integer (readonly)

Return the number of iterations run for optimization.

Returns:

  • (Integer)


# File 'rumale-neural_network/lib/rumale/neural_network/mlp_classifier.rb', line 33

def n_iter
  @n_iter
end

#network ⇒ Rumale::NeuralNetwork::Model::Sequential (readonly)

Return the network.

Returns:

  • (Rumale::NeuralNetwork::Model::Sequential)


# File 'rumale-neural_network/lib/rumale/neural_network/mlp_classifier.rb', line 25

def network
  @network
end

#rng ⇒ Random (readonly)

Return the random generator.

Returns:

  • (Random)


# File 'rumale-neural_network/lib/rumale/neural_network/mlp_classifier.rb', line 37

def rng
  @rng
end

Instance Method Details

#fit(x, y) ⇒ MLPClassifier

Fit the model with given training data.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The training data to be used for fitting the model.

  • y (Numo::Int32)

    (shape: [n_samples]) The labels to be used for fitting the model.

Returns:

  • (MLPClassifier)

    The learned classifier itself.

# File 'rumale-neural_network/lib/rumale/neural_network/mlp_classifier.rb', line 62

def fit(x, y)
  x = ::Rumale::Validation.check_convert_sample_array(x)
  y = ::Rumale::Validation.check_convert_label_array(y)
  ::Rumale::Validation.check_sample_size(x, y)

  @classes = Numo::Int32[*y.to_a.uniq.sort]
  n_labels = @classes.size
  n_features = x.shape[1]
  sub_rng = @rng.dup

  loss = ::Rumale::NeuralNetwork::Loss::SoftmaxCrossEntropy.new
  @network = buld_network(n_features, n_labels, sub_rng)
  @network = train(x, one_hot_encode(y), @network, loss, sub_rng)
  @network.delete_dropout

  self
end
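
Since #fit returns the classifier itself, fitting and prediction can be chained (a usage sketch; training_samples, training_labels, and testing_samples are placeholder variables):

estimator = Rumale::NeuralNetwork::MLPClassifier.new(hidden_units: [64], random_seed: 1)
results = estimator.fit(training_samples, training_labels).predict(testing_samples)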

#predict(x) ⇒ Numo::Int32

Predict class labels for samples.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The samples whose labels are to be predicted.

Returns:

  • (Numo::Int32)

    (shape: [n_samples]) Predicted class label per sample.



# File 'rumale-neural_network/lib/rumale/neural_network/mlp_classifier.rb', line 84

def predict(x)
  x = ::Rumale::Validation.check_convert_sample_array(x)

  n_samples = x.shape[0]
  decision_values = predict_proba(x)
  predicted = Array.new(n_samples) { |n| @classes[decision_values[n, true].max_index] }
  Numo::Int32.asarray(predicted)
end
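
The predictions can be checked against ground-truth labels with #score (mean accuracy), which is mixed in from Base::Classifier (a usage sketch; testing_samples and testing_labels are placeholder variables):

results = estimator.predict(testing_samples)
accuracy = estimator.score(testing_samples, testing_labels)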

#predict_proba(x) ⇒ Numo::DFloat

Predict probability for samples.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The samples whose probabilities are to be predicted.

Returns:

  • (Numo::DFloat)

    (shape: [n_samples, n_classes]) Predicted probability of each class per sample.



# File 'rumale-neural_network/lib/rumale/neural_network/mlp_classifier.rb', line 97

def predict_proba(x)
  x = ::Rumale::Validation.check_convert_sample_array(x)

  out, = @network.forward(x)
  softmax(out)
end
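
Each row of the returned matrix is a softmax distribution over the classes, so its values sum to one and the index of the largest entry matches the label chosen by #predict (a usage sketch; testing_samples is a placeholder variable):

proba = estimator.predict_proba(testing_samples)
proba.shape        # => [n_samples, n_classes]
proba[0, true].sum # => approximately 1.0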