scipy.optimize.fmin_l_bfgs_b

scipy.optimize.fmin_l_bfgs_b(func, x0, fprime=None, args=(), approx_grad=0, bounds=None, m=10, factr=10000000.0, pgtol=1e-05, epsilon=1e-08, iprint=-1, maxfun=15000, disp=None)

Minimize a function func using the L-BFGS-B algorithm.
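
A minimal usage sketch, using the Rosenbrock test function and its analytic gradient that ship with scipy.optimize (rosen and rosen_der):

>>> import numpy as np
>>> from scipy.optimize import fmin_l_bfgs_b, rosen, rosen_der
>>> x, f, d = fmin_l_bfgs_b(rosen, np.array([0.0, 0.0]), fprime=rosen_der)
>>> np.allclose(x, [1.0, 1.0], atol=1e-4)  # known minimum at [1, 1]
True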

Parameters

func : callable f(x,*args)

Function to minimize.

x0 : ndarray

Initial guess.

fprime : callable fprime(x,*args)

The gradient of func. If None, then func returns the function value and the gradient (f, g = func(x, *args)), unless approx_grad is True in which case func returns only f.
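
For illustration, a sketch of the fused form, where func returns the pair (f, g) and fprime is left as None; the quadratic objective here is made up for this example:

>>> import numpy as np
>>> from scipy.optimize import fmin_l_bfgs_b
>>> def f_and_grad(x):
...     # value and gradient of ||x - 3||^2
...     return np.sum((x - 3.0) ** 2), 2.0 * (x - 3.0)
>>> x, f, d = fmin_l_bfgs_b(f_and_grad, x0=np.zeros(2))
>>> np.allclose(x, 3.0)
True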

args : sequence

Arguments to pass to func and fprime.

approx_grad : bool

Whether to approximate the gradient numerically (in which case func returns only the function value).
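
A sketch of the derivative-free form (the objective is invented for this example); with approx_grad=True, func returns only the scalar value and the gradient is estimated by finite differences:

>>> import numpy as np
>>> from scipy.optimize import fmin_l_bfgs_b
>>> def f(x):
...     return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
>>> x, fval, d = fmin_l_bfgs_b(f, x0=[0.0, 0.0], approx_grad=True)
>>> np.allclose(x, [1.0, -2.0], atol=1e-4)
True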

bounds : list

(min, max) pairs for each element in x, defining the bounds on that parameter. Use None for one of min or max when there is no bound in that direction.
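
A sketch of box constraints on a toy quadratic whose unconstrained minimum lies at x = [5, 5]; note the None entry for a one-sided bound:

>>> import numpy as np
>>> from scipy.optimize import fmin_l_bfgs_b
>>> def f(x):
...     # value and gradient of ||x - 5||^2
...     return np.sum((x - 5.0) ** 2), 2.0 * (x - 5.0)
>>> bounds = [(0.0, 2.0), (0.0, None)]  # x[0] in [0, 2]; x[1] only bounded below
>>> x, fval, d = fmin_l_bfgs_b(f, x0=np.array([1.0, 1.0]), bounds=bounds)
>>> np.allclose(x, [2.0, 5.0], atol=1e-4)  # x[0] ends up on its upper bound
True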

m : int

The maximum number of variable metric corrections used to define the limited-memory matrix. (The limited-memory BFGS method does not store the full Hessian but uses this many terms in an approximation to it.)

factr : float

The iteration stops when (f^k - f^{k+1}) / max{|f^k|, |f^{k+1}|, 1} <= factr * eps, where eps is the machine precision (determined automatically by the code). Typical values for factr are: 1e12 for low accuracy; 1e7 for moderate accuracy; 10.0 for extremely high accuracy.

pgtol : float

The iteration will stop when max{|pg_i| : i = 1, ..., n} <= pgtol, where pg_i is the i-th component of the projected gradient.
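
A sketch of tightening both stopping tolerances on the built-in Rosenbrock test function (the specific values are illustrative):

>>> from scipy.optimize import fmin_l_bfgs_b, rosen, rosen_der
>>> x, f, d = fmin_l_bfgs_b(rosen, [0.0, 0.0], fprime=rosen_der,
...                         factr=10.0, pgtol=1e-10)
>>> d['warnflag']  # 0 indicates a tolerance was met
0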

epsilon : float

Step size used for numerically estimating the gradient when approx_grad is True.

iprint : int

Controls the frequency of output. iprint < 0 means no output; iprint == 0 means messages are written to stdout; iprint > 1 additionally writes logging information to a file named iterate.dat in the current working directory.

disp : int, optional

If zero, then no output. If a positive number, this overrides iprint (i.e., iprint gets the value of disp).

maxfun : int

Maximum number of function evaluations.

Returns

x : array_like

Estimated position of the minimum.

f : float

Value of func at the minimum.

d : dict

Information dictionary.

  • d['warnflag'] is
    • 0 if converged,
    • 1 if too many function evaluations,
    • 2 if stopped for another reason, given in d['task']
  • d['grad'] is the gradient at the minimum (which should be close to 0),
  • d['funcalls'] is the number of function calls made.
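
A sketch of inspecting the returned dictionary after a run on a made-up quadratic:

>>> import numpy as np
>>> from scipy.optimize import fmin_l_bfgs_b
>>> def f(x):
...     # value and gradient of ||x||^2
...     return np.sum(x ** 2), 2.0 * x
>>> x, fval, d = fmin_l_bfgs_b(f, x0=np.ones(3))
>>> d['warnflag']
0
>>> np.allclose(d['grad'], 0.0, atol=1e-5)
True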

See also

minimize
Interface to minimization algorithms for multivariate functions. See the ‘L-BFGS-B’ method in particular.
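
For comparison, a sketch of an equivalent bound-constrained call through the minimize interface; method='L-BFGS-B' with the jac and bounds keywords is part of that API:

>>> import numpy as np
>>> from scipy.optimize import minimize, rosen, rosen_der
>>> res = minimize(rosen, [0.0, 0.0], jac=rosen_der, method='L-BFGS-B',
...                bounds=[(0.0, None), (0.0, None)])
>>> res.success
True
>>> np.allclose(res.x, [1.0, 1.0], atol=1e-4)
True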

Notes

License of L-BFGS-B (Fortran code):

The version included here (in Fortran code) is 3.0 (released April 25, 2011). It was written by Ciyou Zhu, Richard Byrd, and Jorge Nocedal <nocedal@ece.nwu.edu>. It carries the following condition for use:

This software is freely available, but we expect that all publications describing work using this software, or all commercial products using it, quote at least one of the references given below. This software is released under the BSD License.

References

  • R. H. Byrd, P. Lu and J. Nocedal. A Limited Memory Algorithm for Bound Constrained Optimization, (1995), SIAM Journal on Scientific and Statistical Computing, 16, 5, pp. 1190-1208.
  • C. Zhu, R. H. Byrd and J. Nocedal. L-BFGS-B: Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization (1997), ACM Transactions on Mathematical Software, 23, 4, pp. 550-560.
  • J. L. Morales and J. Nocedal. L-BFGS-B: Remark on Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization (2011), ACM Transactions on Mathematical Software, 38, 1.
