scipy.optimize.fmin_bfgs
scipy.optimize.fmin_bfgs(f, x0, fprime=None, args=(), gtol=1e-05, norm=inf, epsilon=1.4901161193847656e-08, maxiter=None, full_output=0, disp=1, retall=0, callback=None, xrtol=0)
Minimize a function using the BFGS algorithm.
- Parameters:
- f : callable f(x,*args)
  Objective function to be minimized.
- x0 : ndarray
  Initial guess.
- fprime : callable f'(x,*args), optional
  Gradient of f.
- args : tuple, optional
  Extra arguments passed to f and fprime.
- gtol : float, optional
  Terminate successfully if gradient norm is less than gtol.
- norm : float, optional
  Order of norm (Inf is max, -Inf is min).
- epsilon : int or ndarray, optional
  If fprime is approximated, use this value for the step size.
- callback : callable, optional
  An optional user-supplied function to call after each iteration. Called as callback(xk), where xk is the current parameter vector (see the sketch after this parameter list).
- maxiter : int, optional
  Maximum number of iterations to perform.
- full_output : bool, optional
  If True, return fopt, func_calls, grad_calls, and warnflag in addition to xopt.
- disp : bool, optional
  Print convergence message if True.
- retall : bool, optional
  Return a list of results at each iteration if True.
- xrtol : float, default: 0
  Relative tolerance for x. Terminate successfully if step size is less than xk * xrtol, where xk is the current parameter vector.
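The callback hook above combines naturally with maxiter and disp. A minimal sketch follows; the sphere objective, record function, and iterates list are names introduced here purely for illustration, not part of SciPy:

>>> import numpy as np
>>> from scipy.optimize import fmin_bfgs
>>> def sphere(x):
...     return np.sum(x ** 2)  # simple quadratic used only for this sketch
...
>>> iterates = []  # filled by the callback, one entry per iteration
>>> def record(xk):
...     iterates.append(np.copy(xk))  # xk is the current parameter vector
...
>>> xopt = fmin_bfgs(sphere, np.array([1.0, -2.0]), callback=record,
...                  maxiter=50, disp=0)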
- Returns:
- xopt : ndarray
  Parameters which minimize f, i.e., f(xopt) == fopt.
- fopt : float
  Minimum value.
- gopt : ndarray
  Value of gradient at minimum, f'(xopt), which should be near 0.
- Bopt : ndarray
  Value of 1/f''(xopt), i.e., the inverse Hessian matrix.
- func_calls : int
  Number of function calls made.
- grad_calls : int
  Number of gradient calls made.
- warnflag : int
  1 : Maximum number of iterations exceeded.
  2 : Gradient and/or function calls not changing.
  3 : NaN result encountered.
- allvecs : list
  The value of xopt at each iteration. Only returned if retall is True.
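When full_output is True, the values above come back as a plain tuple in the order listed. A minimal sketch of unpacking them; the sphere objective is an assumption made for the example:

>>> import numpy as np
>>> from scipy.optimize import fmin_bfgs
>>> def sphere(x):
...     return np.sum(x ** 2)  # illustrative objective only
...
>>> out = fmin_bfgs(sphere, np.array([1.0, -2.0]), full_output=True, disp=0)
>>> xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflag = out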
See also
minimize
  Interface to minimization algorithms for multivariate functions. See method='BFGS' in particular.
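For new code, the minimize interface referenced above covers the same functionality. A minimal sketch of the equivalent call; the sphere objective is an assumption made for the example, and minimize returns an OptimizeResult rather than the tuple layout used by fmin_bfgs:

>>> import numpy as np
>>> from scipy.optimize import minimize
>>> def sphere(x):
...     return np.sum(x ** 2)  # illustrative objective only
...
>>> res = minimize(sphere, np.array([1.0, -2.0]), method='BFGS',
...                options={'gtol': 1e-5, 'disp': False})
>>> xopt = res.x    # analogous to xopt from fmin_bfgs
>>> fopt = res.fun  # analogous to fopt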
Notes
Optimize the function f, whose gradient is given by fprime, using the quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno (BFGS).
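For reference, the inverse-Hessian approximation that BFGS maintains (and that is returned as Bopt) is updated at each step by the standard formula given in Nocedal and Wright; the notation below is introduced here for exposition and is not part of the SciPy documentation:

$$
H_{k+1} = \left(I - \rho_k s_k y_k^{\top}\right) H_k \left(I - \rho_k y_k s_k^{\top}\right) + \rho_k s_k s_k^{\top},
\qquad \rho_k = \frac{1}{y_k^{\top} s_k},
$$

where $s_k = x_{k+1} - x_k$ and $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$.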
References
Wright and Nocedal, 'Numerical Optimization', 1999, p. 198.
Examples
>>> import numpy as np
>>> from scipy.optimize import fmin_bfgs
>>> def quadratic_cost(x, Q):
...     return x @ Q @ x
...
>>> x0 = np.array([-3, -4])
>>> cost_weight = np.diag([1., 10.])
>>> # Note that a trailing comma is necessary for a tuple with a single element
>>> fmin_bfgs(quadratic_cost, x0, args=(cost_weight,))
Optimization terminated successfully.
        Current function value: 0.000000
        Iterations: 7                   # may vary
        Function evaluations: 24        # may vary
        Gradient evaluations: 8         # may vary
array([ 2.85169950e-06, -4.61820139e-07])
>>> def quadratic_cost_grad(x, Q):
...     return 2 * Q @ x
...
>>> fmin_bfgs(quadratic_cost, x0, quadratic_cost_grad, args=(cost_weight,))
Optimization terminated successfully.
        Current function value: 0.000000
        Iterations: 7
        Function evaluations: 8
        Gradient evaluations: 8
array([ 2.85916637e-06, -4.54371951e-07])
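When supplying an analytic gradient such as quadratic_cost_grad above, it can be worth verifying it against a finite-difference approximation first. A minimal sketch using scipy.optimize.check_grad with the objects defined in the examples:

>>> from scipy.optimize import check_grad
>>> err = check_grad(quadratic_cost, quadratic_cost_grad, x0, cost_weight)
>>> # err is the norm of the difference between the analytic gradient and a
>>> # finite-difference approximation at x0; it should be close to zero here.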