
scipy.optimize.show_options

scipy.optimize.show_options(solver=None, method=None)

Show documentation for additional options of optimization solvers.

These are method-specific options that can be supplied through the options dict.

Parameters:

solver : str

Type of optimization solver. One of ‘minimize’, ‘minimize_scalar’, ‘root’, or ‘linprog’.

method : str, optional

If not given, shows all methods of the specified solver. Otherwise, show only the options for the specified method. Valid values correspond to the method names of the respective solver (e.g. ‘BFGS’ for ‘minimize’).
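
For instance, the following calls (a minimal usage sketch) print the BFGS-specific options and the options of every root solver, respectively:

>>> from scipy.optimize import show_options
>>> show_options(solver='minimize', method='BFGS')   # options for one method
>>> show_options(solver='root')                      # options for all 'root' methods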

Notes

Minimize options

BFGS options:

gtol : float
Gradient norm must be less than gtol before successful termination.
norm : float
Order of norm (Inf is max, -Inf is min).
eps : float or ndarray
If jac is approximated, use this value for the step size.
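
As a brief sketch of passing these through the options dict (the test function, starting point, and tolerance below are arbitrary choices):

>>> import numpy as np
>>> from scipy.optimize import minimize, rosen
>>> res = minimize(rosen, x0=[1.3, 0.7], method='BFGS',
...                options={'gtol': 1e-6, 'norm': np.inf})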

Nelder-Mead options:

xtol : float
Relative error in solution xopt acceptable for convergence.
ftol : float
Relative error in fun(xopt) acceptable for convergence.
maxfev : int
Maximum number of function evaluations to make.

Newton-CG options:

xtol : float
Average relative error in solution xopt acceptable for convergence.
eps : float or ndarray
If jac is approximated, use this value for the step size.

CG options:

gtol : float
Gradient norm must be less than gtol before successful termination.
norm : float
Order of norm (Inf is max, -Inf is min).
eps : float or ndarray
If jac is approximated, use this value for the step size.

Powell options:

xtol : float
Relative error in solution xopt acceptable for convergence.
ftol : float
Relative error in fun(xopt) acceptable for convergence.
maxfev : int
Maximum number of function evaluations to make.
direc : ndarray
Initial set of direction vectors for the Powell method.

Anneal options:

ftol : float
Relative error in fun(x) acceptable for convergence.
schedule : str
Annealing schedule to use. One of: ‘fast’, ‘cauchy’ or ‘boltzmann’.
T0 : float
Initial Temperature (estimated as 1.2 times the largest cost-function deviation over random points in the range).
Tf : float
Final goal temperature.
maxfev : int
Maximum number of function evaluations to make.
maxaccept : int
Maximum changes to accept.
boltzmann : float
Boltzmann constant in acceptance test (increase for less stringent test at each temperature).
learn_rate : float
Scale constant for adjusting guesses.
quench, m, n : float
Parameters to alter fast_sa schedule.
lower, upper : float or ndarray
Lower and upper bounds on x.
dwell : int
The number of times to search the space at each temperature.

L-BFGS-B options:

ftol : float
The iteration stops when (f^k - f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= ftol.
gtol : float
The iteration will stop when max{|proj g_i | i = 1, ..., n} <= gtol where pg_i is the i-th component of the projected gradient.
eps : float or ndarray
If jac is approximated, use this value for the step size.
maxcor : int
The maximum number of variable metric corrections used to define the limited memory matrix. (The limited memory BFGS method does not store the full hessian but uses this many terms in an approximation to it.)
maxfun : int
Maximum number of function evaluations.
maxiter : int
Maximum number of iterations.
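
For example (bounds and tolerances here are arbitrary, illustrative values):

>>> from scipy.optimize import minimize, rosen
>>> res = minimize(rosen, x0=[0.5, 0.5], method='L-BFGS-B',
...                bounds=[(0, 2), (0, 2)],
...                options={'ftol': 1e-10, 'gtol': 1e-8,
...                         'maxcor': 20, 'maxiter': 200})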

TNC options:

ftol : float
Precision goal for the value of f in the stopping criterion. If ftol < 0.0, ftol is set to 0.0. Defaults to -1.
xtol : float
Precision goal for the value of x in the stopping criterion (after applying x scaling factors). If xtol < 0.0, xtol is set to sqrt(machine_precision). Defaults to -1.
gtol : float
Precision goal for the value of the projected gradient in the stopping criterion (after applying x scaling factors). If gtol < 0.0, gtol is set to 1e-2 * sqrt(accuracy). Setting it to 0.0 is not recommended. Defaults to -1.
scale : list of floats
Scaling factors to apply to each variable. If None, the factors are up - low for interval bounded variables and 1+|x| for the others. Defaults to None.
offset : float
Value to subtract from each variable. If None, the offsets are (up+low)/2 for interval bounded variables and x for the others.
maxCGit : int
Maximum number of Hessian*vector evaluations per main iteration. If maxCGit == 0, the direction chosen is -gradient. If maxCGit < 0, maxCGit is set to max(1, min(50, n/2)). Defaults to -1.
maxiter : int
Maximum number of function evaluations. If None, maxiter is set to max(100, 10*len(x0)). Defaults to None.
eta : float
Severity of the line search. If < 0 or > 1, set to 0.25. Defaults to -1.
stepmx : float
Maximum step for the line search. May be increased during call. If too small, it will be set to 10.0. Defaults to 0.
accuracy : float
Relative precision for finite difference calculations. If <= machine_precision, set to sqrt(machine_precision). Defaults to 0.
minfev : float
Minimum function value estimate. Defaults to 0.
rescale : float
Scaling factor (in log10) used to trigger f value rescaling. If 0, rescale at each iteration. If a large value, never rescale. If < 0, rescale is set to 1.3.

COBYLA options:

tol : float
Final accuracy in the optimization (not precisely guaranteed). This is a lower bound on the size of the trust region.
rhobeg : float
Reasonable initial changes to the variables.
maxfev : int
Maximum number of function evaluations.
catol : float
Absolute tolerance for constraint violations (default: 1e-6).
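
A sketch of a constrained call (the objective, constraint, and option values are purely illustrative):

>>> from scipy.optimize import minimize
>>> cons = ({'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 1},)  # x0 + x1 >= 1
>>> res = minimize(lambda x: x[0]**2 + x[1]**2, x0=[2.0, 0.0],
...                method='COBYLA', constraints=cons,
...                options={'rhobeg': 0.5, 'tol': 1e-6})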

SLSQP options:

ftol : float
Precision goal for the value of f in the stopping criterion.
eps : float
Step size used for numerical approximation of the jacobian.
maxiter : int
Maximum number of iterations.

dogleg options:

initial_trust_radius : float
Initial trust-region radius.
max_trust_radius : float
Maximum value of the trust-region radius. No steps that are longer than this value will be proposed.
eta : float
Trust region related acceptance stringency for proposed steps.
gtol : float
Gradient norm must be less than gtol before successful termination.
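
Note that the dogleg method also requires the gradient and Hessian to be supplied via jac and hess. A minimal sketch on a simple convex quadratic (radii and tolerance chosen arbitrarily):

>>> import numpy as np
>>> from scipy.optimize import minimize
>>> fun = lambda x: x[0]**2 + 4*x[1]**2
>>> jac = lambda x: np.array([2*x[0], 8*x[1]])
>>> hess = lambda x: np.array([[2.0, 0.0], [0.0, 8.0]])
>>> res = minimize(fun, x0=[3.0, -2.0], method='dogleg', jac=jac, hess=hess,
...                options={'initial_trust_radius': 1.0,
...                         'max_trust_radius': 100.0, 'gtol': 1e-6})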

trust-ncg options:

See dogleg options.

minimize_scalar options

brent options:

xtol : float
Relative error in solution xopt acceptable for convergence.

bounded options:

xatol : float
Absolute error in solution xopt acceptable for convergence.
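
For example (objective and bounds chosen only for illustration):

>>> from scipy.optimize import minimize_scalar
>>> res = minimize_scalar(lambda x: (x - 2.0)**2, bounds=(0, 10),
...                       method='bounded', options={'xatol': 1e-8})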

golden options:

xtol : float
Relative error in solution xopt acceptable for convergence.

root options

hybrd options:

col_deriv : bool
Specify whether the Jacobian function computes derivatives down the columns (faster, because there is no transpose operation).
xtol : float
The calculation will terminate if the relative error between two consecutive iterates is at most xtol.
maxfev : int
The maximum number of calls to the function. If zero, then 100*(N+1) is the maximum where N is the number of elements in x0.
band : sequence
If set to a two-sequence containing the number of sub- and super-diagonals within the band of the Jacobi matrix, the Jacobi matrix is considered banded (only for fprime=None).
epsfcn : float
A suitable step length for the forward-difference approximation of the Jacobian (for fprime=None). If epsfcn is less than the machine precision, it is assumed that the relative errors in the functions are of the order of the machine precision.
factor : float
A parameter determining the initial step bound (factor * ||diag * x||). Should be in the interval (0.1, 100).
diag : sequence
N positive entries that serve as scale factors for the variables.
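
These are supplied to scipy.optimize.root in the same way (method='hybr' selects this HYBRD-based solver; the system below is only an illustrative sketch):

>>> from scipy.optimize import root
>>> def fun(x):
...     return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
...             0.5 * (x[1] - x[0])**3 + x[1]]
>>> sol = root(fun, x0=[0.0, 0.0], method='hybr',
...            options={'xtol': 1e-10, 'maxfev': 1000})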

LM options:

col_deriv : bool
Non-zero to specify that the Jacobian function computes derivatives down the columns (faster, because there is no transpose operation).
ftol : float
Relative error desired in the sum of squares.
xtol : float
Relative error desired in the approximate solution.
gtol : float
Orthogonality desired between the function vector and the columns of the Jacobian.
maxiter : int
The maximum number of calls to the function. If zero, then 100*(N+1) is the maximum where N is the number of elements in x0.
epsfcn : float
A suitable step length for the forward-difference approximation of the Jacobian (for Dfun=None). If epsfcn is less than the machine precision, it is assumed that the relative errors in the functions are of the order of the machine precision.
factor : float
A parameter determining the initial step bound (factor * ||diag * x||). Should be in the interval (0.1, 100).
diag : sequence
N positive entries that serve as scale factors for the variables.

Broyden1 options:

nit : int, optional
Number of iterations to make. If omitted (default), make as many as required to meet tolerances.
disp : bool, optional
Print status to stdout on every iteration.
maxiter : int, optional
Maximum number of iterations to make. If more are needed to meet convergence, NoConvergence is raised.
ftol : float, optional
Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
Absolute tolerance (in max-norm) for the residual. If omitted, default is 6e-6.
xtol : float, optional
Relative minimum step size. If omitted, not used.
xatol : float, optional
Absolute minimum step size, as determined from the Jacobian approximation. If the step size is smaller than this, optimization is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
Norm to use in convergence check. Default is the maximum norm.
line_search : {None, ‘armijo’ (default), ‘wolfe’}, optional
Which type of line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to ‘armijo’.
jac_options : dict, optional
Options for the respective Jacobian approximation.
alpha : float, optional
Initial guess for the Jacobian is (-1/alpha).
reduction_method : str or tuple, optional
Method used in ensuring that the rank of the Broyden matrix stays low. Can either be a string giving the name of the method, or a tuple of the form (method, param1, param2, ...) that gives the name of the method and values for additional parameters.
Methods available:
  • restart: drop all matrix columns. Has no extra parameters.
  • simple: drop oldest matrix column. Has no extra parameters.
  • svd: keep only the most significant SVD components. Extra parameters:
    • to_retain: number of SVD components to retain when rank reduction is done. Default is max_rank - 2.
max_rank : int, optional
Maximum rank for the Broyden matrix. Default is infinity (i.e., no rank reduction).
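
A sketch of how the nested jac_options dict is supplied (the test system and all parameter values here are illustrative only):

>>> from scipy.optimize import root
>>> def fun(x):
...     return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
...             0.5 * (x[1] - x[0])**3 + x[1]]
>>> sol = root(fun, x0=[0.0, 0.0], method='broyden1',
...            options={'fatol': 1e-8,
...                     'jac_options': {'alpha': 0.5,
...                                     'reduction_method': ('svd', 3),
...                                     'max_rank': 10}})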

Broyden2 options:

nit : int, optional
Number of iterations to make. If omitted (default), make as many as required to meet tolerances.
disp : bool, optional
Print status to stdout on every iteration.
maxiter : int, optional
Maximum number of iterations to make. If more are needed to meet convergence, NoConvergence is raised.
ftol : float, optional
Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
Absolute tolerance (in max-norm) for the residual. If omitted, default is 6e-6.
xtol : float, optional
Relative minimum step size. If omitted, not used.
xatol : float, optional
Absolute minimum step size, as determined from the Jacobian approximation. If the step size is smaller than this, optimization is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
Norm to use in convergence check. Default is the maximum norm.
line_search : {None, ‘armijo’ (default), ‘wolfe’}, optional
Which type of line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to ‘armijo’.
jac_options : dict, optional

Options for the respective Jacobian approximation.

alpha : float, optional
Initial guess for the Jacobian is (-1/alpha).
reduction_method : str or tuple, optional
Method used in ensuring that the rank of the Broyden matrix stays low. Can either be a string giving the name of the method, or a tuple of the form (method, param1, param2, ...) that gives the name of the method and values for additional parameters.
Methods available:
  • restart: drop all matrix columns. Has no extra parameters.
  • simple: drop oldest matrix column. Has no extra parameters.
  • svd: keep only the most significant SVD components. Extra parameters:
    • to_retain: number of SVD components to retain when rank reduction is done. Default is max_rank - 2.

max_rank : int, optional
Maximum rank for the Broyden matrix. Default is infinity (i.e., no rank reduction).

Anderson options:

nit : int, optional
Number of iterations to make. If omitted (default), make as many as required to meet tolerances.
disp : bool, optional
Print status to stdout on every iteration.
maxiter : int, optional
Maximum number of iterations to make. If more are needed to meet convergence, NoConvergence is raised.
ftol : float, optional
Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
Absolute tolerance (in max-norm) for the residual. If omitted, default is 6e-6.
xtol : float, optional
Relative minimum step size. If omitted, not used.
xatol : float, optional
Absolute minimum step size, as determined from the Jacobian approximation. If the step size is smaller than this, optimization is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
Norm to use in convergence check. Default is the maximum norm.
line_search : {None, ‘armijo’ (default), ‘wolfe’}, optional
Which type of line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to ‘armijo’.
jac_options : dict, optional

Options for the respective Jacobian approximation.

alpha : float, optional
Initial guess for the Jacobian is (-1/alpha).
M : float, optional
Number of previous vectors to retain. Defaults to 5.
w0 : float, optional
Regularization parameter for numerical stability. Good values are of the order of 0.01, compared to unity.

LinearMixing options:

nit : int, optional
Number of iterations to make. If omitted (default), make as many as required to meet tolerances.
disp : bool, optional
Print status to stdout on every iteration.
maxiter : int, optional
Maximum number of iterations to make. If more are needed to meet convergence, NoConvergence is raised.
ftol : float, optional
Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
Absolute tolerance (in max-norm) for the residual. If omitted, default is 6e-6.
xtol : float, optional
Relative minimum step size. If omitted, not used.
xatol : float, optional
Absolute minimum step size, as determined from the Jacobian approximation. If the step size is smaller than this, optimization is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
Norm to use in convergence check. Default is the maximum norm.
line_search : {None, ‘armijo’ (default), ‘wolfe’}, optional
Which type of line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to ‘armijo’.
jac_options : dict, optional

Options for the respective Jacobian approximation.

alpha : float, optional
Initial guess for the Jacobian is (-1/alpha).

DiagBroyden options:

nit : int, optional
Number of iterations to make. If omitted (default), make as many as required to meet tolerances.
disp : bool, optional
Print status to stdout on every iteration.
maxiter : int, optional
Maximum number of iterations to make. If more are needed to meet convergence, NoConvergence is raised.
ftol : float, optional
Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
Absolute tolerance (in max-norm) for the residual. If omitted, default is 6e-6.
xtol : float, optional
Relative minimum step size. If omitted, not used.
xatol : float, optional
Absolute minimum step size, as determined from the Jacobian approximation. If the step size is smaller than this, optimization is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
Norm to use in convergence check. Default is the maximum norm.
line_search : {None, ‘armijo’ (default), ‘wolfe’}, optional
Which type of line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to ‘armijo’.
jac_options : dict, optional

Options for the respective Jacobian approximation.

alpha : float, optional
Initial guess for the Jacobian is (-1/alpha).

ExcitingMixing options:

nit : int, optional
Number of iterations to make. If omitted (default), make as many as required to meet tolerances.
disp : bool, optional
Print status to stdout on every iteration.
maxiter : int, optional
Maximum number of iterations to make. If more are needed to meet convergence, NoConvergence is raised.
ftol : float, optional
Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
Absolute tolerance (in max-norm) for the residual. If omitted, default is 6e-6.
xtol : float, optional
Relative minimum step size. If omitted, not used.
xatol : float, optional
Absolute minimum step size, as determined from the Jacobian approximation. If the step size is smaller than this, optimization is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
Norm to use in convergence check. Default is the maximum norm.
line_search : {None, ‘armijo’ (default), ‘wolfe’}, optional
Which type of line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to ‘armijo’.
jac_options : dict, optional

Options for the respective Jacobian approximation.

alpha : float, optional
Initial Jacobian approximation is (-1/alpha).
alphamax : float, optional
The entries of the diagonal Jacobian are kept in the range [alpha, alphamax].

Krylov options:

nit : int, optional
Number of iterations to make. If omitted (default), make as many as required to meet tolerances.
disp : bool, optional
Print status to stdout on every iteration.
maxiter : int, optional
Maximum number of iterations to make. If more are needed to meet convergence, NoConvergence is raised.
ftol : float, optional
Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
Absolute tolerance (in max-norm) for the residual. If omitted, default is 6e-6.
xtol : float, optional
Relative minimum step size. If omitted, not used.
xatol : float, optional
Absolute minimum step size, as determined from the Jacobian approximation. If the step size is smaller than this, optimization is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
Norm to use in convergence check. Default is the maximum norm.
line_search : {None, ‘armijo’ (default), ‘wolfe’}, optional
Which type of line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to ‘armijo’.
jac_options : dict, optional

Options for the respective Jacobian approximation.

rdiff : float, optional
Relative step size to use in numerical differentiation.
method : {‘lgmres’, ‘gmres’, ‘bicgstab’, ‘cgs’, ‘minres’} or function

Krylov method to use to approximate the Jacobian. Can be a string, or a function implementing the same interface as the iterative solvers in scipy.sparse.linalg.

The default is scipy.sparse.linalg.lgmres.

inner_M : LinearOperator or InverseJacobian

Preconditioner for the inner Krylov iteration. Note that you can also use inverse Jacobians as (adaptive) preconditioners. For example,

>>> from scipy.optimize.nonlin import BroydenFirst, KrylovJacobian
>>> jac = BroydenFirst()
>>> kjac = KrylovJacobian(inner_M=jac.inverse)

If the preconditioner has a method named ‘update’, it will be called as update(x, f) after each nonlinear step, with x giving the current point, and f the current function value.

inner_tol, inner_maxiter, ...
Parameters to pass on to the “inner” Krylov solver. See scipy.sparse.linalg.gmres for details.
outer_k : int, optional

Size of the subspace kept across LGMRES nonlinear iterations.

See scipy.sparse.linalg.lgmres for details.
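
For example (a sketch only; the test system and inner-solver settings are arbitrary):

>>> import numpy as np
>>> from scipy.optimize import root
>>> def fun(x):                      # simple componentwise nonlinear system
...     return x + 0.1 * x**3 - np.cos(np.arange(x.size))
>>> sol = root(fun, x0=np.zeros(10), method='krylov',
...            options={'fatol': 1e-8,
...                     'jac_options': {'rdiff': 1e-8, 'method': 'lgmres',
...                                     'inner_maxiter': 20, 'outer_k': 10}})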

linprog options

simplex options:

maxiter : int, optional
Maximum number of iterations to make.
tol : float, optional
The tolerance which determines when the Phase 1 objective is sufficiently close to zero to be considered a basic feasible solution or when the Phase 2 objective coefficients are close enough to positive for the objective to be considered optimal.
bland : bool, optional
If True, choose pivots using Bland’s rule. In problems which fail to converge due to cycling, using Bland’s rule can provide convergence at the expense of a less optimal path about the simplex.
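
A minimal sketch of passing these to linprog (the toy LP is purely illustrative):

>>> from scipy.optimize import linprog
>>> # maximize x + 2y  <=>  minimize -x - 2y, subject to x + y <= 4 and x <= 3
>>> res = linprog(c=[-1, -2], A_ub=[[1, 1], [1, 0]], b_ub=[4, 3],
...               method='simplex',
...               options={'maxiter': 1000, 'tol': 1e-9, 'bland': True})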