Class: Mopti::ScaledConjugateGradient
- Inherits:
- Object
- Includes:
- Enumerable
- Defined in:
- lib/mopti/scaled_conjugate_gradient.rb
Overview
ScaledConjugateGradient is a class that implements multivariate optimization using the scaled conjugate gradient method.
Reference
-
Møller, M. F., "A Scaled Conjugate Gradient Algorithm for Fast Supervised Learning," Neural Networks, Vol. 6, pp. 525–533, 1993.
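The heart of the method is a Hessian-free step-size computation: curvature along the search direction is estimated by a finite difference of the gradient and regularized by a scaling parameter, giving the step size alpha = -mu / delta. The following is a minimal one-dimensional sketch of that step on a hypothetical quadratic objective, with the scaling parameter held fixed for simplicity; it is an illustration, not the gem's implementation:

```ruby
SIGMA_INIT = 1e-4 # finite-difference step scale (same constant name as the gem)
BETA = 1.0        # scaling (regularization) parameter, kept fixed in this sketch

# One simplified SCG step for a scalar problem: given the gradient g at x,
# take the steepest-descent direction d = -g, estimate curvature along d by
# a finite difference of the gradient, regularize it, and step by -mu / delta.
def scg_step(x, grad_fn)
  g = grad_fn.call(x)
  d = -g                                # search direction
  mu = d * g                            # directional derivative along d
  kappa = d * d                         # squared norm of d
  sigma = SIGMA_INIT / Math.sqrt(kappa) # finite-difference step
  theta = d * (grad_fn.call(x + sigma * d) - g) / sigma # curvature estimate
  delta = theta + BETA * kappa          # regularized curvature
  x + (-mu / delta) * d                 # step size alpha = -mu / delta
end

grad = ->(x) { 2.0 * x } # gradient of f(x) = x**2
x = 3.0
10.times { x = scg_step(x, grad) }
# x is now close to the minimizer 0.0
```

With the quadratic above each step contracts x by a constant factor, so the iterates converge geometrically toward the minimum; the full algorithm additionally adapts beta and applies conjugate direction updates.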
Instance Method Summary
-
#each(&block) ⇒ Enumerator
Iteration for optimization.
-
#initialize(fnc:, jcb:, x_init:, args: nil, max_iter: 200, xtol: 1e-6, ftol: 1e-8, jtol: 1e-7) ⇒ ScaledConjugateGradient
constructor
Create a new optimizer with scaled conjugate gradient.
Constructor Details
#initialize(fnc:, jcb:, x_init:, args: nil, max_iter: 200, xtol: 1e-6, ftol: 1e-8, jtol: 1e-7) ⇒ ScaledConjugateGradient
Create a new optimizer with scaled conjugate gradient.
# File 'lib/mopti/scaled_conjugate_gradient.rb', line 59

def initialize(fnc:, jcb:, x_init:, args: nil, max_iter: 200, xtol: 1e-6, ftol: 1e-8, jtol: 1e-7)
  @fnc = fnc
  @jcb = jcb
  @x_init = x_init
  @args = args
  @max_iter = max_iter
  @ftol = ftol
  @jtol = jtol
  @xtol = xtol
end
Instance Method Details
#each(&block) ⇒ Enumerator
Iteration for optimization.
# File 'lib/mopti/scaled_conjugate_gradient.rb', line 81

def each
  return to_enum(__method__) unless block_given?

  x = @x_init
  f_prev = func(x, @args)
  n_fev = 1
  f_curr = f_prev
  j_next = jacb(x, @args)
  n_jev = 1
  j_curr = j_next.dot(j_next)
  j_prev = j_next.dup
  d = -j_next
  success = true
  n_successes = 0
  beta = 1.0
  n_iter = 0

  while n_iter < @max_iter
    if success
      mu = d.dot(j_next)
      if mu >= 0.0
        d = -j_next
        mu = d.dot(j_next)
      end
      kappa = d.dot(d)
      break if kappa < 1e-16

      sigma = SIGMA_INIT / Math.sqrt(kappa)
      x_plus = x + sigma * d
      j_plus = jacb(x_plus, @args)
      n_jev += 1
      theta = d.dot(j_plus - j_next) / sigma
    end

    delta = theta + beta * kappa
    if delta <= 0
      delta = beta * kappa
      # TODO: Investigate the cause of the type error.
      # Cannot assign a value of type `::Complex` to a variable of type `::Float`
      # beta -= theta / kappa
      beta = (beta - (theta / kappa)).to_f
    end
    alpha = -mu / delta

    x_next = x + alpha * d
    f_next = func(x_next, @args)
    n_fev += 1

    delta = 2 * (f_next - f_prev) / (alpha * mu)
    if delta >= 0
      success = true
      n_successes += 1
      x = x_next
      f_curr = f_next
    else
      success = false
      f_curr = f_prev
    end

    n_iter += 1
    yield({ x: x, n_fev: n_fev, n_jev: n_jev, n_iter: n_iter, fnc: f_curr, jcb: j_curr })

    if success
      break if (f_next - f_prev).abs < @ftol
      break if (alpha * d).abs.max < @xtol

      f_prev = f_next
      j_prev = j_next
      j_next = jacb(x, @args)
      n_jev += 1
      j_curr = j_next.dot(j_next)
      break if j_curr <= @jtol
    end

    beta = beta * 4 < BETA_MAX ? beta * 4 : BETA_MAX if delta < 0.25
    beta = beta / 4 > BETA_MIN ? beta / 4 : BETA_MIN if delta > 0.75

    if n_successes == x.size
      d = -j_next
      beta = 1.0
      n_successes = 0
    elsif success
      gamma = (j_prev - j_next).dot(j_next) / mu
      d = -j_next + gamma * d
    end
  end
end
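Because the class mixes in Enumerable and #each begins with the standard `return to_enum(__method__) unless block_given?` idiom, intermediate optimization states can be consumed lazily through an Enumerator as well as with a block. A minimal, self-contained illustration of that idiom (Countdown is a hypothetical stand-in class, not part of the gem):

```ruby
# Minimal demonstration of the Enumerator pattern used by #each:
# calling the method without a block returns an Enumerator, and
# Enumerable methods (first, take, to_a, ...) work on top of it.
class Countdown
  include Enumerable

  def initialize(from)
    @from = from
  end

  def each
    return to_enum(__method__) unless block_given?

    @from.downto(1) { |i| yield i }
  end
end

enum = Countdown.new(3).each # no block given: returns an Enumerator
enum.next                    # => 3 (pulled lazily, one value at a time)
Countdown.new(3).first(2)    # => [3, 2] via Enumerable
```

The same pattern lets callers stop the optimizer early, e.g. by calling `.next` on the Enumerator until a custom convergence criterion is met, without exhausting all iterations.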