Minimize a function with variables subject to bounds, using gradient information in a truncated Newton algorithm. This method wraps a C implementation of the algorithm.
Parameters :
func : callable func(x, *args)
    Function to minimize. It must either return the function value and the gradient (f, g = func(x, *args)), or return the value alone and supply the gradient via fprime or set approx_grad=True. If func returns None, the minimization is aborted.
x0 : list of floats
    Initial estimate of the minimum.
fprime : callable fprime(x, *args)
    Gradient of func. If None, then func must return the function value and the gradient, or approx_grad must be True.
args : tuple
    Extra arguments to pass to func and fprime.
approx_grad : bool
    If True, approximate the gradient numerically.
bounds : list
    (min, max) pairs for each element in x0, defining the bounds on that parameter. Use None for min or max when there is no bound in that direction.
epsilon : float
    Used if approx_grad is True. The step size in the finite-difference approximation of the gradient.
scale : list of floats
    Scaling factors to apply to each variable. If None, the factors are up - low for interval-bounded variables and 1 + |x| for the others. Defaults to None.
offset : float
    Value to subtract from each variable. If None, the offsets are (up + low) / 2 for interval-bounded variables and x for the others.
messages : int
    Bit mask used to select messages displayed during minimization (values defined in the MSGS dict).
disp : int
    Integer interface to messages. 0 = no message, 5 = all messages.
maxCGit : int
    Maximum number of Hessian*vector evaluations per main iteration. If maxCGit == 0, the direction chosen is -gradient; if maxCGit < 0, maxCGit is set to max(1, min(50, n/2)). Defaults to -1.
maxfun : int
    Maximum number of function evaluations. If None, maxfun is set to max(100, 10*len(x0)). Defaults to None.
eta : float
    Severity of the line search. If < 0 or > 1, set to 0.25. Defaults to -1.
stepmx : float
    Maximum step for the line search. May be increased during the call. If too small, it will be set to 10.0. Defaults to 0.
accuracy : float
    Relative precision for finite-difference calculations. If <= machine precision, set to sqrt(machine precision). Defaults to 0.
fmin : float
    Minimum function value estimate. Defaults to 0.
ftol : float
    Precision goal for the value of f in the stopping criterion. If ftol < 0.0, ftol is set to 0.0. Defaults to -1.
xtol : float
    Precision goal for the value of x in the stopping criterion (after applying x scaling factors). If xtol < 0.0, xtol is set to sqrt(machine precision). Defaults to -1.
pgtol : float
    Precision goal for the value of the projected gradient in the stopping criterion (after applying x scaling factors). If pgtol < 0.0, pgtol is set to 1e-2 * sqrt(accuracy). Setting it to 0.0 is not recommended. Defaults to -1.
rescale : float
    Scaling factor (in log10) used to trigger rescaling of the f value. If 0, rescale at each iteration. If a large value, never rescale. If < 0, rescale is set to 1.3.
callback : callable, optional
    Called after each iteration, as callback(xk), where xk is the current parameter vector.
Returns :
x : list of floats
    The solution.
nfeval : int
    The number of function evaluations.
rc : int
    Return code, as defined in the RCSTRINGS dict.
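As a quick illustration of the call signature and return values, here is a minimal sketch that minimizes a simple bound-constrained quadratic; the objective and bounds are invented for the example:

    import numpy as np
    from scipy.optimize import fmin_tnc

    # Illustrative objective: f(x) = (x0 - 3)^2 + (x1 + 1)^2.
    # The function returns both the value and the gradient, so
    # neither fprime nor approx_grad is needed.
    def func(x):
        f = (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2
        g = np.array([2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)])
        return f, g

    x0 = [0.0, 0.0]
    # Box constraints: the unconstrained minimum (3, -1) lies outside
    # [0, 2] x [0, 2], so the solution should sit on the bounds.
    bounds = [(0.0, 2.0), (0.0, 2.0)]

    x, nfeval, rc = fmin_tnc(func, x0, bounds=bounds)
    print(x, nfeval, rc)  # x should be close to [2.0, 0.0]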
See also
minimize
    Interface to minimization algorithms for multivariate functions. See the 'TNC' method in particular.
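The same problem can be posed through the scipy.optimize.minimize interface; this sketch reuses the illustrative func, x0, and bounds from the example above:

    from scipy.optimize import minimize

    # jac=True tells minimize that func returns the gradient
    # together with the function value.
    res = minimize(func, x0, jac=True, bounds=bounds, method='TNC')
    print(res.x)  # again close to [2.0, 0.0]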
Notes
The underlying algorithm is truncated Newton, also called Newton Conjugate-Gradient. This method differs from scipy.optimize.fmin_ncg in that

1. it wraps a C implementation of the algorithm
2. it allows each variable to be given upper and lower bounds.
The algorithm incorporates the bound constraints by determining the descent direction as in an unconstrained truncated Newton method, but never taking a step size large enough to leave the space of feasible x's. The algorithm keeps track of a set of currently active constraints, and ignores them when computing the minimum allowable step size. (The x's associated with the active constraints are kept fixed.) If the maximum allowable step size is zero then a new constraint is added. At the end of each iteration one of the constraints may be deemed no longer active and removed. A constraint is considered no longer active if it is currently active but the gradient for that variable points inward from the constraint. The specific constraint removed is the one associated with the variable of largest index whose constraint is no longer active.
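To make this bookkeeping concrete, here is a schematic NumPy sketch of the two rules described above. It is not the wrapped C code, and the helper names max_feasible_step and release_inactive are invented for illustration:

    import numpy as np

    def max_feasible_step(x, d, lower, upper, active):
        # Largest step alpha such that x + alpha * d stays inside
        # [lower, upper]; variables in the active set are held fixed
        # and therefore ignored. A result of 0.0 means the direction
        # immediately pushes against a bound, so a new constraint
        # would be added (that variable becomes active).
        alpha = np.inf
        for i in range(len(x)):
            if active[i] or d[i] == 0.0:
                continue
            bound = upper[i] if d[i] > 0.0 else lower[i]
            alpha = min(alpha, (bound - x[i]) / d[i])
        return max(alpha, 0.0)

    def release_inactive(x, grad, lower, upper, active):
        # Remove the largest-index active constraint whose descent
        # direction (-grad) points inward from the bound: at a lower
        # bound this means grad[i] < 0, at an upper bound grad[i] > 0.
        for i in reversed(range(len(x))):
            if not active[i]:
                continue
            at_lower = np.isclose(x[i], lower[i])
            inward = grad[i] < 0.0 if at_lower else grad[i] > 0.0
            if inward:
                active[i] = False
                return i  # at most one constraint removed per iteration
        return None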
References
Wright S., Nocedal J. (2006), "Numerical Optimization"
Nash S.G. (1984), "Newton-Type Minimization Via the Lanczos Method", SIAM Journal on Numerical Analysis 21, pp. 770-778