scipy.optimize.SR1

class scipy.optimize.SR1(min_denominator=1e-08, init_scale='auto')

Symmetric-rank-1 Hessian update strategy.
Parameters

min_denominator : float, optional
    This number, scaled by a normalization factor, defines the minimum
    denominator magnitude allowed in the update. When that condition is
    violated, the update is skipped. Default is 1e-8.
init_scale : {float, 'auto'}, optional
    Matrix scale at the first iteration. At the first iteration the
    Hessian matrix or its inverse is initialized as init_scale*np.eye(n),
    where n is the problem dimension. Set it to 'auto' to use an
    automatic heuristic for choosing the initial scale; the heuristic is
    described in [1], p. 143. Default is 'auto'.
Notes

The update is based on the description in [1], p. 144-146.
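For reference, the symmetric-rank-1 update described there (standard form, not spelled out on this page) modifies the current approximation B_k using the step s_k = delta_x and the gradient difference y_k = delta_grad:

```latex
B_{k+1} = B_k
  + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^{\mathsf T}}
         {(y_k - B_k s_k)^{\mathsf T} s_k}
```

The min_denominator parameter guards this formula: when the denominator (y_k - B_k s_k)^T s_k is too small in magnitude relative to the norms of s_k and y_k - B_k s_k, the update is skipped, since applying it would be numerically unstable.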
References

[1] Nocedal, Jorge, and Stephen J. Wright. "Numerical Optimization", Second Edition (2006).
Methods

dot(self, p)
    Compute the product of the internal matrix with the given vector.
get_matrix(self)
    Return the current internal matrix.
initialize(self, n, approx_type)
    Initialize the internal matrix.
update(self, delta_x, delta_grad)
    Update the internal matrix.