minimize(method='L-BFGS-B')

scipy.optimize.minimize(fun, x0, args=(), method='L-BFGS-B', jac=None, bounds=None, tol=None, callback=None, options={'disp': None, 'maxcor': 10, 'ftol': 2.220446049250313e-09, 'gtol': 1e-05, 'eps': 1e-08, 'maxfun': 15000, 'maxiter': 15000, 'iprint': -1, 'maxls': 20, 'finite_diff_rel_step': None})

Minimize a scalar function of one or more variables using the L-BFGS-B algorithm.
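The following is a minimal illustrative sketch, not part of the reference text: it minimizes SciPy's built-in Rosenbrock test function (scipy.optimize.rosen) with its analytic gradient (rosen_der) under simple box bounds; the starting point, bounds, and option values are arbitrary choices made only for demonstration.

>>> import numpy as np
>>> from scipy.optimize import minimize, rosen, rosen_der
>>> x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
>>> bounds = [(0.0, 2.0)] * len(x0)   # box constraints on every variable
>>> res = minimize(rosen, x0, method='L-BFGS-B', jac=rosen_der,
...                bounds=bounds,
...                options={'ftol': 1e-10, 'gtol': 1e-8, 'maxiter': 500})
>>> res.x, res.fun, res.nit           # solution (near all ones), objective value, iteration count

The unconstrained minimum of the Rosenbrock function lies at all ones, which is inside these bounds, so the bounded solution coincides with it.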

See also

For documentation of the remaining parameters, see scipy.optimize.minimize.

Options
disp : None or int

If disp is None (the default), then the supplied value of iprint is used. If disp is not None, it overrides iprint, and the value of disp then controls output in the same way as iprint (described below).

maxcor : int

The maximum number of variable metric corrections used to define the limited memory matrix. (The limited memory BFGS method does not store the full Hessian but uses this many terms in an approximation to it.)

ftol : float

The iteration stops when (f^k - f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= ftol.

gtol : float

The iteration will stop when max{|proj g_i| : i = 1, ..., n} <= gtol, where proj g_i is the i-th component of the projected gradient.

eps : float or ndarray

If jac is None, this is the absolute step size used for numerical approximation of the Jacobian via forward differences.

maxfun : int

Maximum number of function evaluations.

maxiter : int

Maximum number of iterations.

iprint : int, optional

Controls the frequency of output:
iprint < 0 : no output;
iprint = 0 : print only one line at the last iteration;
0 < iprint < 99 : also print f and |proj g| every iprint iterations;
iprint = 99 : print details of every iteration except n-vectors;
iprint = 100 : also print the changes of active set and final x;
iprint > 100 : print details of every iteration including x and g.

callback : callable, optional

Called after each iteration, as callback(xk), where xk is the current parameter vector.
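As an illustrative sketch (the record helper and history list below are hypothetical names, not part of SciPy), a callback can be used to collect the iterates produced during a run:

>>> import numpy as np
>>> from scipy.optimize import minimize, rosen, rosen_der
>>> history = []                      # iterates collected after each iteration
>>> def record(xk):
...     history.append(xk.copy())     # xk is the current parameter vector
...
>>> res = minimize(rosen, np.array([1.3, 0.7]), method='L-BFGS-B',
...                jac=rosen_der, callback=record)
>>> len(history)                      # roughly one entry per iteration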

maxls : int, optional

Maximum number of line search steps (per iteration). Default is 20.

finite_diff_rel_step : None or array_like, optional

If jac is in ['2-point', '3-point', 'cs'], this is the relative step size to use for numerical approximation of the Jacobian. The absolute step size is computed as h = rel_step * sign(x0) * max(1, abs(x0)), possibly adjusted to fit into the bounds. For the '3-point' scheme the sign of h is ignored. If None (default), then the step is selected automatically.
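For illustration only (the difference scheme and step value below are arbitrary choices, not recommendations), the relative step can be supplied through the options dict when the gradient is approximated numerically:

>>> import numpy as np
>>> from scipy.optimize import minimize, rosen
>>> res = minimize(rosen, np.array([1.3, 0.7]), method='L-BFGS-B',
...                jac='2-point',     # gradient approximated by finite differences
...                options={'finite_diff_rel_step': 1e-6})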

Notes

The option ftol is exposed via the scipy.optimize.minimize interface, but calling scipy.optimize.fmin_l_bfgs_b directly exposes factr. The relationship between the two is ftol = factr * numpy.finfo(float).eps. I.e., factr multiplies the default machine floating-point precision to arrive at ftol.
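As a quick numerical illustration of that relationship (using the default factr of scipy.optimize.fmin_l_bfgs_b), the conversion can be checked directly; the result matches the default ftol shown in the signature above:

>>> import numpy as np
>>> factr = 1e7                        # default factr of fmin_l_bfgs_b
>>> float(factr * np.finfo(float).eps) # the corresponding ftol
2.220446049250313e-09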
