Optimization and Root Finding (scipy.optimize)
SciPy optimize provides functions for minimizing (or maximizing) objective functions, possibly subject to constraints. It includes solvers for nonlinear problems (with support for both local and global optimization algorithms), linear programming, constrained and nonlinear least-squares, root finding, and curve fitting.
Common functions and objects, shared across different solvers, are:
Optimization
Scalar Functions Optimization
minimize_scalar(fun[, bracket, bounds, …]) | Minimization of scalar function of one variable.
The minimize_scalar function supports the following methods: ‘brent’, ‘bounded’, and ‘golden’.
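A minimal sketch of typical usage (the quadratic objective here is purely illustrative):
>>> from scipy.optimize import minimize_scalar
>>> f = lambda x: (x - 2) ** 2 + 1  # illustrative objective, minimum at x = 2
>>> res = minimize_scalar(f)        # Brent's method by default
>>> res.x  # ~2.0
>>> minimize_scalar(f, bounds=(0, 1), method='bounded').x  # ~1.0, pinned to the bound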
Local (Multivariate) Optimization
minimize(fun, x0[, args, method, jac, hess, …]) | Minimization of scalar function of one or more variables.
The minimize function supports the following methods: ‘Nelder-Mead’, ‘Powell’, ‘CG’, ‘BFGS’, ‘Newton-CG’, ‘L-BFGS-B’, ‘TNC’, ‘COBYLA’, ‘SLSQP’, ‘trust-constr’, ‘dogleg’, ‘trust-ncg’, ‘trust-exact’, and ‘trust-krylov’.
Constraints are passed to the minimize function as a single object or as a list of objects from the following classes:
NonlinearConstraint(fun, lb, ub[, jac, hess, …]) | Nonlinear constraint on the variables.
LinearConstraint(A, lb, ub[, keep_feasible]) | Linear constraint on the variables.
Simple bound constraints are handled separately and there is a special class for them:
Bounds(lb, ub[, keep_feasible]) | Bounds constraint on the variables.
Quasi-Newton strategies implementing the HessianUpdateStrategy interface can be used to approximate the Hessian in the minimize function (available only for the ‘trust-constr’ method). The available quasi-Newton methods implementing this interface are:
BFGS([exception_strategy, min_curvature, …]) | Broyden-Fletcher-Goldfarb-Shanno (BFGS) Hessian update strategy.
SR1([min_denominator, init_scale]) | Symmetric-rank-1 Hessian update strategy.
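As an illustrative sketch (the objective and bounds are made up for the example), minimizing a smooth function with and without bound constraints might look like:
>>> from scipy.optimize import minimize, Bounds
>>> def f(x):
...     return (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2  # minimum at (1, 2.5)
>>> minimize(f, x0=[2.0, 0.0], method='BFGS').x  # ~[1.0, 2.5]
>>> bounds = Bounds([0.0, 0.0], [1.0, 2.0])      # 0 <= x0 <= 1, 0 <= x1 <= 2
>>> minimize(f, x0=[0.5, 0.5], method='trust-constr', bounds=bounds).x  # ~[1.0, 2.0]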
Global Optimization
basinhopping(func, x0[, niter, T, stepsize, …]) | Find the global minimum of a function using the basin-hopping algorithm.
brute(func, ranges[, args, Ns, full_output, …]) | Minimize a function over a given range by brute force.
differential_evolution(func, bounds[, args, …]) | Finds the global minimum of a multivariate function.
shgo(func, bounds[, args, constraints, n, …]) | Finds the global minimum of a function using SHG optimization.
dual_annealing(func, bounds[, args, …]) | Find the global minimum of a function using Dual Annealing.
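For instance, a small sketch using differential_evolution on the built-in Rosenbrock benchmark (the bounds are chosen arbitrarily):
>>> from scipy.optimize import differential_evolution, rosen
>>> result = differential_evolution(rosen, bounds=[(-5, 5), (-5, 5)], seed=1)
>>> result.x, result.fun  # ~[1, 1], ~0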
Least-squares and Curve Fitting
Nonlinear Least-Squares
least_squares(fun, x0[, jac, bounds, …]) | Solve a nonlinear least-squares problem with bounds on the variables.
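A brief sketch, assuming a made-up exponential model whose residuals are minimized subject to bounds:
>>> import numpy as np
>>> from scipy.optimize import least_squares
>>> t = np.linspace(0, 1, 20)
>>> y = 2.0 * np.exp(1.3 * t)  # synthetic data generated from known parameters
>>> def residuals(p):
...     return p[0] * np.exp(p[1] * t) - y
>>> res = least_squares(residuals, x0=[1.0, 1.0], bounds=([0, 0], [10, 10]))
>>> res.x  # ~[2.0, 1.3]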
Linear Least-Squares
nnls(A, b[, maxiter]) | Solve argmin_x ||Ax - b||_2 for x >= 0.
lsq_linear(A, b[, bounds, method, tol, …]) | Solve a linear least-squares problem with bounds on the variables.
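A small sketch contrasting the two (the matrix and right-hand side are arbitrary):
>>> import numpy as np
>>> from scipy.optimize import nnls, lsq_linear
>>> A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
>>> b = np.array([2.0, 1.0, 1.0])
>>> x, rnorm = nnls(A, b)                  # nonnegativity constraint x >= 0
>>> res = lsq_linear(A, b, bounds=(0, 2))  # general box bounds 0 <= x <= 2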
Curve Fitting
curve_fit(f, xdata, ydata[, p0, sigma, …]) | Use non-linear least squares to fit a function, f, to data.
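A short sketch, fitting a made-up exponential-decay model to noisy synthetic data:
>>> import numpy as np
>>> from scipy.optimize import curve_fit
>>> def model(x, a, b):
...     return a * np.exp(-b * x)  # illustrative model; a, b are the fit parameters
>>> xdata = np.linspace(0, 4, 50)
>>> rng = np.random.default_rng(0)
>>> ydata = model(xdata, 2.5, 1.3) + 0.05 * rng.normal(size=xdata.size)
>>> popt, pcov = curve_fit(model, xdata, ydata, p0=[1.0, 1.0])
>>> popt  # ~[2.5, 1.3]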
Root Finding
Scalar functions
root_scalar(f[, args, method, bracket, …]) | Find a root of a scalar function.
brentq(f, a, b[, args, xtol, rtol, maxiter, …]) | Find a root of a function in a bracketing interval using Brent’s method.
brenth(f, a, b[, args, xtol, rtol, maxiter, …]) | Find a root of a function in a bracketing interval using Brent’s method with hyperbolic extrapolation.
ridder(f, a, b[, args, xtol, rtol, maxiter, …]) | Find a root of a function in an interval using Ridder’s method.
bisect(f, a, b[, args, xtol, rtol, maxiter, …]) | Find a root of a function within an interval using bisection.
newton(func, x0[, fprime, args, tol, …]) | Find a zero of a real or complex function using the Newton-Raphson (or secant or Halley’s) method.
toms748(f, a, b[, args, k, xtol, rtol, …]) | Find a zero using TOMS Algorithm 748.
RootResults(root, iterations, …) | Represents the root-finding result.
The root_scalar function supports the following methods: ‘bisect’, ‘brentq’, ‘brenth’, ‘ridder’, ‘toms748’, ‘newton’, ‘secant’, and ‘halley’.
The table below lists situations and appropriate methods, along with asymptotic convergence rates per iteration (and per function evaluation) for successful convergence to a simple root (*). Bisection is the slowest of them all, adding one bit of accuracy for each function evaluation, but is guaranteed to converge. The other bracketing methods all (eventually) increase the number of accurate bits by about 50% for every function evaluation. The derivative-based methods, all built on newton, can converge quite quickly if the initial value is close to the root. They can also be applied to functions defined on (a subset of) the complex plane.
Domain of f | Bracket? | fprime needed? | fprime2 needed? | Solver | Convergence guaranteed? | Rate(s) (*)
R | Yes | N/A | N/A | bisection | Yes | 1 “Linear”
R | Yes | N/A | N/A | brentq | Yes | >=1, <=1.62
R | Yes | N/A | N/A | brenth | Yes | >=1, <=1.62
R | Yes | N/A | N/A | ridder | Yes | 2.0 (1.41)
R | Yes | N/A | N/A | toms748 | Yes | 2.7 (1.65)
R or C | No | No | No | secant | No | 1.62 (1.62)
R or C | No | Yes | No | newton | No | 2.00 (1.41)
R or C | No | Yes | Yes | halley | No | 3.00 (1.44)
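To make the trade-off concrete, here is a small sketch (the cubic is illustrative) using one bracketing and one derivative-based method:
>>> from scipy.optimize import root_scalar
>>> f = lambda x: x ** 3 - 1  # simple real root at x = 1
>>> root_scalar(f, bracket=[0, 3], method='brentq').root  # guaranteed: sign change on [0, 3]
>>> root_scalar(f, x0=0.5, fprime=lambda x: 3 * x ** 2,
...             method='newton').root                     # fast near the root, not guaranteed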
Fixed point finding:
fixed_point(func, x0[, args, xtol, maxiter, …]) | Find a fixed point of the function.
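For example, a one-line sketch solving x = cos(x):
>>> import numpy as np
>>> from scipy.optimize import fixed_point
>>> fixed_point(np.cos, 0.5)  # ~0.739085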
Multidimensional
root(fun, x0[, args, method, jac, tol, …]) | Find a root of a vector function.
The root function supports the following methods: ‘hybr’, ‘lm’, ‘broyden1’, ‘broyden2’, ‘anderson’, ‘linearmixing’, ‘diagbroyden’, ‘excitingmixing’, ‘krylov’, and ‘df-sane’.
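A short sketch on an illustrative system of two nonlinear equations:
>>> from scipy.optimize import root
>>> def fun(x):
...     return [x[0] + 0.5 * (x[0] - x[1]) ** 3 - 1.0,
...             0.5 * (x[1] - x[0]) ** 3 + x[1]]
>>> sol = root(fun, x0=[0.0, 0.0], method='hybr')
>>> sol.x  # ~[0.8411, 0.1589]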
Linear Programming
linprog(c[, A_ub, b_ub, A_eq, b_eq, bounds, …]) | Minimize a linear objective function subject to linear equality and inequality constraints.
The linprog function supports the following methods:
The simplex method supports callback functions, such as:
linprog_verbose_callback | A sample callback function demonstrating the linprog callback interface.
Assignment problems:
linear_sum_assignment(cost_matrix) | Solve the linear sum assignment problem.
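A minimal sketch (the cost vector and constraints are made up): maximize x + 2y subject to x + y <= 4 and x <= 3 by minimizing the negated objective:
>>> from scipy.optimize import linprog
>>> c = [-1, -2]             # minimize -x - 2y, i.e. maximize x + 2y
>>> A_ub = [[1, 1], [1, 0]]  # x + y <= 4, x <= 3
>>> b_ub = [4, 3]
>>> res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
>>> res.x, res.fun  # ~[0, 4], -8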
Utilities
Finite-Difference Approximation
approx_fprime(xk, f, epsilon, *args) | Finite-difference approximation of the gradient of a scalar function.
check_grad(func, grad, x0, *args, **kwargs) | Check the correctness of a gradient function by comparing it against a (forward) finite-difference approximation of the gradient.
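A small sketch of both utilities on a hand-written gradient (the function and evaluation point are arbitrary):
>>> import numpy as np
>>> from scipy.optimize import approx_fprime, check_grad
>>> def f(x):
...     return x[0] ** 2 + x[1] ** 3
>>> def grad(x):
...     return np.array([2 * x[0], 3 * x[1] ** 2])
>>> x0 = np.array([1.0, 2.0])
>>> eps = np.sqrt(np.finfo(float).eps)
>>> approx_fprime(x0, f, eps)  # ~[2.0, 12.0]
>>> check_grad(f, grad, x0)    # ~0: the analytic gradient matches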
Line Search
bracket(func[, xa, xb, args, grow_limit, …]) | Bracket the minimum of the function.
line_search(f, myfprime, xk, pk[, gfk, …]) | Find alpha that satisfies strong Wolfe conditions.
Benchmark Problems
rosen(x) | The Rosenbrock function.
rosen_der(x) | The derivative (i.e. gradient) of the Rosenbrock function.
rosen_hess(x) | The Hessian matrix of the Rosenbrock function.
rosen_hess_prod(x, p) | Product of the Hessian matrix of the Rosenbrock function with a vector.
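These are handy for exercising the solvers above; for example, a sketch using the Hessian-vector product with Newton-CG:
>>> from scipy.optimize import minimize, rosen, rosen_der, rosen_hess_prod
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> res = minimize(rosen, x0, method='Newton-CG',
...                jac=rosen_der, hessp=rosen_hess_prod)
>>> res.x  # ~[1, 1, 1, 1, 1], the global minimum of the Rosenbrock function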
Legacy Functions
The functions below are not recommended for use in new scripts; all of these methods are accessible via the newer, more consistent interfaces described above.
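For instance, a call to the legacy fmin can be written against the newer interface like this (a sketch with an arbitrary quadratic):
>>> from scipy.optimize import fmin, minimize
>>> f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
>>> x_legacy = fmin(f, [0, 0], disp=False)               # legacy interface
>>> x_new = minimize(f, [0, 0], method='Nelder-Mead').x  # newer equivalent
>>> # both return ~[1.0, -2.0]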
Optimization
General-purpose multivariate methods:
fmin(func, x0[, args, xtol, ftol, maxiter, …]) | Minimize a function using the downhill simplex algorithm.
fmin_powell(func, x0[, args, xtol, ftol, …]) | Minimize a function using modified Powell’s method.
fmin_cg(f, x0[, fprime, args, gtol, norm, …]) | Minimize a function using a nonlinear conjugate gradient algorithm.
fmin_bfgs(f, x0[, fprime, args, gtol, norm, …]) | Minimize a function using the BFGS algorithm.
fmin_ncg(f, x0, fprime[, fhess_p, fhess, …]) | Unconstrained minimization of a function using the Newton-CG method.
Constrained multivariate methods:
fmin_l_bfgs_b(func, x0[, fprime, args, …]) | Minimize a function func using the L-BFGS-B algorithm.
fmin_tnc(func, x0[, fprime, args, …]) | Minimize a function with variables subject to bounds, using gradient information in a truncated Newton algorithm.
fmin_cobyla(func, x0, cons[, args, …]) | Minimize a function using the Constrained Optimization BY Linear Approximation (COBYLA) method.
fmin_slsqp(func, x0[, eqcons, f_eqcons, …]) | Minimize a function using Sequential Least Squares Programming (SLSQP).
differential_evolution(func, bounds[, args, …]) | Finds the global minimum of a multivariate function.
Univariate (scalar) minimization methods:
fminbound(func, x1, x2[, args, xtol, …]) | Bounded minimization for scalar functions.
brent(func[, args, brack, tol, full_output, …]) | Given a function of one variable and a possible bracket, return the local minimum of the function isolated to a fractional precision of tol.
golden(func[, args, brack, tol, …]) | Return the minimum of a function of one variable using the golden-section method.
Least-Squares
leastsq(func, x0[, args, Dfun, full_output, …]) | Minimize the sum of squares of a set of equations.
Root Finding
General nonlinear solvers:
fsolve(func, x0[, args, fprime, …]) | Find the roots of a function.
broyden1(F, xin[, iter, alpha, …]) | Find a root of a function, using Broyden’s first Jacobian approximation.
broyden2(F, xin[, iter, alpha, …]) | Find a root of a function, using Broyden’s second Jacobian approximation.
Large-scale nonlinear solvers:
newton_krylov(F, xin[, iter, rdiff, method, …]) | Find a root of a function, using Krylov approximation for inverse Jacobian.
anderson(F, xin[, iter, alpha, w0, M, …]) | Find a root of a function, using (extended) Anderson mixing.
Simple iteration solvers:
excitingmixing(F, xin[, iter, alpha, …]) | Find a root of a function, using a tuned diagonal Jacobian approximation.
linearmixing(F, xin[, iter, alpha, verbose, …]) | Find a root of a function, using a scalar Jacobian approximation.
diagbroyden(F, xin[, iter, alpha, verbose, …]) | Find a root of a function, using diagonal Broyden Jacobian approximation.
Additional information on the nonlinear solvers is provided in a dedicated section of the documentation.