minimize(method='L-BFGS-B')
- scipy.optimize.minimize(fun, x0, args=(), method='L-BFGS-B', jac=None, bounds=None, tol=None, callback=None, options={'disp': None, 'maxls': 20, 'iprint': -1, 'gtol': 1e-05, 'eps': 1e-08, 'maxiter': 15000, 'ftol': 2.220446049250313e-09, 'maxcor': 10, 'maxfun': 15000})
Minimize a scalar function of one or more variables using the L-BFGS-B algorithm.
See also
For documentation of the rest of the parameters, see scipy.optimize.minimize.
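A minimal usage sketch (the objective, starting point, and bounds below are illustrative choices, not part of this reference): minimize SciPy's built-in Rosenbrock function rosen, with its analytic gradient rosen_der, under simple box bounds.

>>> import numpy as np
>>> from scipy.optimize import minimize, rosen, rosen_der
>>> x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
>>> bounds = [(0.0, 2.0)] * len(x0)   # (min, max) per variable; None means unbounded on that side
>>> res = minimize(rosen, x0, method='L-BFGS-B', jac=rosen_der, bounds=bounds)
>>> res.success
True

The solution is returned in res.x, and res.message reports which stopping criterion was met.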
Options
disp : bool
Set to True to print convergence messages.
maxcor : int
The maximum number of variable metric corrections used to define the limited memory matrix. (The limited memory BFGS method does not store the full Hessian but uses this many terms in an approximation to it.)
factr : float
The iteration stops when (f^k - f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= factr * eps, where eps is the machine precision, which is automatically generated by the code. Typical values for factr are: 1e12 for low accuracy; 1e7 for moderate accuracy; 10.0 for extremely high accuracy. The minimize interface exposes this tolerance as ftol, with ftol = factr * eps; the default ftol shown above corresponds to factr = 1e7.
ftol : float
The iteration stops when (f^k - f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= ftol.
gtol : float
The iteration will stop when max{|proj g_i | i = 1, ..., n} <= gtol, where proj g_i is the i-th component of the projected gradient.
eps : float
Step size used for numerical approximation of the Jacobian.
maxfun : int
Maximum number of function evaluations.
maxiter : int
Maximum number of iterations.
maxls : int, optional
Maximum number of line search steps (per iteration). Default is 20.
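The sketch below shows how the options above are passed through the options dict; the specific values are illustrative assumptions rather than recommended settings. Tighter ftol and gtol generally cost extra iterations and function evaluations.

>>> import numpy as np
>>> from scipy.optimize import minimize, rosen, rosen_der
>>> res = minimize(rosen, np.zeros(5), method='L-BFGS-B', jac=rosen_der,
...                options={'ftol': 1e-12,     # relative decrease in f required to keep iterating
...                         'gtol': 1e-8,      # threshold on the projected gradient
...                         'maxcor': 20,      # limited-memory correction pairs to keep
...                         'maxiter': 500,    # cap on iterations
...                         'maxfun': 10000})  # cap on function evaluations
>>> res.success
True

The counters res.nit and res.nfev report how many iterations and function evaluations were actually used, which helps when adjusting maxiter and maxfun.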