scipy.optimize.fmin_cg

scipy.optimize.fmin_cg(f, x0, fprime=None, args=(), gtol=1e-05, norm=inf, epsilon=1.4901161193847656e-08, maxiter=None, full_output=0, disp=1, retall=0, callback=None)

Minimize a function using a nonlinear conjugate gradient algorithm.
Parameters:

f : callable, f(x, *args)
    Objective function to be minimized. Here x must be a 1-D array of the variables that are to be changed in the search for a minimum, and args are the other (fixed) parameters of f.
x0 : ndarray
    A user-supplied initial estimate of xopt, the optimal value of x. It must be a 1-D array of values.
fprime : callable, fprime(x, *args), optional
    A function that returns the gradient of f at x. Here x and args are as described above for f. The returned value must be a 1-D array. Defaults to None, in which case the gradient is approximated numerically (see epsilon, below).
args : tuple, optional
    Parameter values passed to f and fprime. Must be supplied whenever additional fixed parameters are needed to completely specify the functions f and fprime.
gtol : float, optional
    Stop when the norm of the gradient is less than gtol.
norm : float, optional
    Order to use for the norm of the gradient (-np.Inf is min, np.Inf is max).
epsilon : float or ndarray, optional
    Step size(s) to use when fprime is approximated numerically. Can be a scalar or a 1-D array. Defaults to sqrt(eps), with eps the floating point machine precision. Usually sqrt(eps) is about 1.5e-8.
maxiter : int, optional
    Maximum number of iterations to perform. Default is 200 * len(x0).
full_output : bool, optional
    If True, return fopt, func_calls, grad_calls, and warnflag in addition to xopt. See the Returns section below for additional information on optional return values.
disp : bool, optional
    If True, return a convergence message, followed by xopt.
retall : bool, optional
    If True, add to the returned values the results of each iteration.
callback : callable, optional
    An optional user-supplied function, called after each iteration. Called as callback(xk), where xk is the current value of x0.

Returns:

xopt : ndarray
    Parameters which minimize f, i.e., f(xopt) == fopt.
fopt : float, optional
    Minimum value found, f(xopt). Only returned if full_output is True.
func_calls : int, optional
    The number of function calls made. Only returned if full_output is True.
grad_calls : int, optional
    The number of gradient calls made. Only returned if full_output is True.
warnflag : int, optional
    Integer value with warning status, only returned if full_output is True.
    0 : Success.
    1 : The maximum number of iterations was exceeded.
    2 : Gradient and/or function calls were not changing. May indicate that precision was lost, i.e., the routine did not converge.
allvecs : list of ndarray, optional
    List of arrays, containing the results at each iteration. Only returned if retall is True.
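To illustrate how the optional return values are unpacked, here is a minimal sketch. The quadratic objective below is an arbitrary example chosen for illustration, not part of the original documentation.

```python
import numpy as np
from scipy import optimize

# Illustrative objective: a quadratic bowl with its minimum at (1, 2).
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# With full_output=True and retall=True, the extra values documented
# above are returned in this order. fprime is omitted, so the gradient
# is approximated numerically.
xopt, fopt, func_calls, grad_calls, warnflag, allvecs = optimize.fmin_cg(
    f, np.zeros(2), full_output=True, retall=True, disp=False)

print(xopt)          # near [1., 2.]
print(warnflag)      # 0 indicates success
print(len(allvecs))  # recorded iterates, starting from x0
```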
See also

minimize
    Common interface to all scipy.optimize algorithms for unconstrained and constrained minimization of multivariate functions. It provides an alternative way to call fmin_cg, by specifying method='CG'.
Notes
This conjugate gradient algorithm is based on that of Polak and Ribiere [R200].
Conjugate gradient methods tend to work better when:

- f has a unique global minimizing point, and no local minima or other stationary points,
- f is, at least locally, reasonably well approximated by a quadratic function of the variables,
- f is continuous and has a continuous gradient,
- fprime is not too large, e.g., has a norm less than 1000,
- the initial guess, x0, is reasonably close to f's global minimizing point, xopt.
References
[R200] Wright & Nocedal, “Numerical Optimization”, 1999, pp. 120-122.

Examples
Example 1: seek the minimum value of the expression a*u**2 + b*u*v + c*v**2 + d*u + e*v + f for given values of the parameters and an initial guess (u, v) = (0, 0).

>>> args = (2, 3, 7, 8, 9, 10)  # parameter values
>>> def f(x, *args):
...     u, v = x
...     a, b, c, d, e, f = args
...     return a*u**2 + b*u*v + c*v**2 + d*u + e*v + f
>>> def gradf(x, *args):
...     u, v = x
...     a, b, c, d, e, f = args
...     gu = 2*a*u + b*v + d    # u-component of the gradient
...     gv = b*u + 2*c*v + e    # v-component of the gradient
...     return np.asarray((gu, gv))
>>> x0 = np.asarray((0, 0))  # Initial guess.
>>> from scipy import optimize
>>> res1 = optimize.fmin_cg(f, x0, fprime=gradf, args=args)
Optimization terminated successfully.
         Current function value: 1.617021
         Iterations: 4
         Function evaluations: 8
         Gradient evaluations: 8
>>> res1
array([-1.80851064, -0.25531915])
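The callback parameter can be used to monitor progress. The sketch below builds on Example 1 and records each iterate; the names record and iterates are illustrative, not part of the API.

```python
import numpy as np
from scipy import optimize

args = (2, 3, 7, 8, 9, 10)

def f(x, *args):
    u, v = x
    a, b, c, d, e, f = args
    return a*u**2 + b*u*v + c*v**2 + d*u + e*v + f

def gradf(x, *args):
    u, v = x
    a, b, c, d, e, f = args
    return np.asarray((2*a*u + b*v + d, b*u + 2*c*v + e))

iterates = []  # filled in by the callback, one entry per iteration

def record(xk):
    # xk is the current parameter vector at the end of an iteration.
    iterates.append(np.copy(xk))

xopt = optimize.fmin_cg(f, np.zeros(2), fprime=gradf, args=args,
                        callback=record, disp=False)
print(len(iterates))  # number of iterations performed
```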
Example 2: solve the same problem using the minimize function. (This opts dictionary shows all of the available options, although in practice only non-default values would be needed. The returned value will be a dictionary.)

>>> opts = {'maxiter' : None,    # default value.
...         'disp' : True,       # non-default value.
...         'gtol' : 1e-5,       # default value.
...         'norm' : np.inf,     # default value.
...         'eps' : 1.4901161193847656e-08}  # default value.
>>> res2 = optimize.minimize(f, x0, jac=gradf, args=args,
...                          method='CG', options=opts)
Optimization terminated successfully.
        Current function value: 1.617021
        Iterations: 4
        Function evaluations: 8
        Gradient evaluations: 8
>>> res2.x  # minimum found
array([-1.80851064, -0.25531915])
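If no gradient function is supplied, fmin_cg falls back to a finite-difference approximation controlled by epsilon (default about sqrt(machine eps), roughly 1.5e-8). A sketch reusing the objective from Example 1, with fprime omitted:

```python
import numpy as np
from scipy import optimize

args = (2, 3, 7, 8, 9, 10)

def f(x, *args):
    u, v = x
    a, b, c, d, e, f = args
    return a*u**2 + b*u*v + c*v**2 + d*u + e*v + f

# fprime is omitted, so the gradient is approximated numerically with
# step size epsilon; the result should closely match the
# analytic-gradient result from Example 1.
res = optimize.fmin_cg(f, np.zeros(2), args=args, disp=False)
print(res)
```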