SciPy

minimize(method='L-BFGS-B')

scipy.optimize.minimize(fun, x0, args=(), method='L-BFGS-B', jac=None, bounds=None, tol=None, callback=None, options={'disp': None, 'maxcor': 10, 'ftol': 2.220446049250313e-09, 'gtol': 1e-05, 'eps': 1e-08, 'maxfun': 15000, 'maxiter': 15000, 'iprint': -1, 'maxls': 20})

Minimize a scalar function of one or more variables using the L-BFGS-B algorithm.
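
A minimal usage sketch (the quadratic objective, starting point, and bounds below are illustrative choices, not part of the interface):

>>> import numpy as np
>>> from scipy.optimize import minimize
>>> def fun(x):
...     # Illustrative objective: a shifted quadratic whose unconstrained
...     # minimum lies at (1, -2.5).
...     return (x[0] - 1.0)**2 + (x[1] + 2.5)**2
>>> x0 = np.zeros(2)
>>> bounds = [(0, None), (0, None)]  # constrain both variables to be >= 0
>>> res = minimize(fun, x0, method='L-BFGS-B', bounds=bounds)
>>> res.x   # constrained minimizer, approximately [1., 0.]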

See also

For documentation of the remaining parameters, see scipy.optimize.minimize

Options:
disp : bool

Set to True to print convergence messages.

maxcor : int

The maximum number of variable metric corrections used to define the limited memory matrix. (The limited memory BFGS method does not store the full Hessian but uses this many terms in an approximation to it.)

ftol : float

The iteration stops when (f^k - f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= ftol.

gtol : float

The iteration will stop when max{|pg_i| : i = 1, ..., n} <= gtol, where pg_i is the i-th component of the projected gradient.

eps : float

Step size used for numerical approximation of the Jacobian.

maxfun : int

Maximum number of function evaluations.

maxiter : int

Maximum number of iterations.

maxls : int, optional

Maximum number of line search steps (per iteration). Default is 20.
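
These options are passed to scipy.optimize.minimize through its options dict. Continuing the example above (the tolerance and iteration values here are arbitrary illustrations, not recommendations):

>>> res = minimize(fun, x0, method='L-BFGS-B', bounds=bounds,
...                options={'maxcor': 25, 'ftol': 1e-10, 'gtol': 1e-8,
...                         'maxiter': 200, 'disp': True})

With disp set to True the routine prints its convergence report; the remaining options tighten the stopping tolerances and cap the iteration count.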

Notes

The option ftol is exposed via the scipy.optimize.minimize interface, but calling scipy.optimize.fmin_l_bfgs_b directly exposes factr. The relationship between the two is ftol = factr * numpy.finfo(float).eps. That is, factr multiplies the default machine floating-point precision to arrive at ftol.
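
A quick check of this relationship, using the default factr of 1e7 from scipy.optimize.fmin_l_bfgs_b:

>>> import numpy as np
>>> factr = 1e7  # default factr in fmin_l_bfgs_b
>>> factr * np.finfo(float).eps
2.220446049250313e-09

which matches the default ftol shown in the signature above.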