This is documentation for an old release of SciPy (version 0.17.1).
Optimization and root finding (scipy.optimize)
Optimization
Local Optimization
minimize(fun, x0[, args, method, jac, hess, ...])
    Minimization of scalar function of one or more variables.
minimize_scalar(fun[, bracket, bounds, ...])
    Minimization of scalar function of one variable.
OptimizeResult
    Represents the optimization result.
OptimizeWarning
The minimize function supports the following methods, selected via the method keyword: 'Nelder-Mead', 'Powell', 'CG', 'BFGS', 'Newton-CG', 'L-BFGS-B', 'TNC', 'COBYLA', 'SLSQP', 'dogleg', and 'trust-ncg'.
The minimize_scalar function supports the following methods: 'Brent', 'Bounded', and 'Golden'.
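For example, a minimal sketch minimizing the Rosenbrock function with BFGS (the starting point is arbitrary), plus the scalar analogue:

>>> import numpy as np
>>> from scipy.optimize import minimize, minimize_scalar, rosen, rosen_der
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]          # arbitrary starting point
>>> res = minimize(rosen, x0, method='BFGS', jac=rosen_der)
>>> np.round(res.x, 6)                      # known minimum at (1, ..., 1)
array([ 1.,  1.,  1.,  1.,  1.])
>>> res1 = minimize_scalar(lambda x: (x - 2)**2)  # minimum near x = 2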
The specific optimization method interfaces below in this subsection are
not recommended for use in new scripts; all of these methods are accessible
via a newer, more consistent interface provided by the functions above.
General-purpose multivariate methods:
fmin(func, x0[, args, xtol, ftol, maxiter, ...])
    Minimize a function using the downhill simplex algorithm.
fmin_powell(func, x0[, args, xtol, ftol, ...])
    Minimize a function using modified Powell’s method.
fmin_cg(f, x0[, fprime, args, gtol, norm, ...])
    Minimize a function using a nonlinear conjugate gradient algorithm.
fmin_bfgs(f, x0[, fprime, args, gtol, norm, ...])
    Minimize a function using the BFGS algorithm.
fmin_ncg(f, x0, fprime[, fhess_p, fhess, ...])
    Unconstrained minimization of a function using the Newton-CG method.
Constrained multivariate methods:
fmin_l_bfgs_b(func, x0[, fprime, args, ...])
    Minimize a function func using the L-BFGS-B algorithm.
fmin_tnc(func, x0[, fprime, args, ...])
    Minimize a function with variables subject to bounds, using gradient information in a truncated Newton algorithm.
fmin_cobyla(func, x0, cons[, args, ...])
    Minimize a function using the Constrained Optimization BY Linear Approximation (COBYLA) method.
fmin_slsqp(func, x0[, eqcons, f_eqcons, ...])
    Minimize a function using Sequential Least SQuares Programming (SLSQP).
differential_evolution(func, bounds[, args, ...])
    Finds the global minimum of a multivariate function.
Univariate (scalar) minimization methods:
fminbound(func, x1, x2[, args, xtol, ...])
    Bounded minimization for scalar functions.
brent(func[, args, brack, tol, full_output, ...])
    Given a function of one variable and a possible bracketing interval, return the minimum of the function isolated to a fractional precision of tol.
golden(func[, args, brack, tol, full_output])
    Return the minimum of a function of one variable.
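As a small illustration of the note above, a legacy fminbound call next to its recommended minimize_scalar equivalent (the quadratic objective is made up for the sketch):

>>> from scipy.optimize import fminbound, minimize_scalar
>>> f = lambda x: (x - 1.5)**2
>>> xopt = fminbound(f, 0, 4)                                  # legacy interface
>>> res = minimize_scalar(f, bounds=(0, 4), method='bounded')  # newer interface
>>> abs(xopt - res.x) < 1e-8                # both locate the minimum near x = 1.5
True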
Equation (Local) Minimizers
leastsq(func, x0[, args, Dfun, full_output, ...])
    Minimize the sum of squares of a set of equations.
least_squares(fun, x0[, jac, bounds, ...])
    Solve a nonlinear least-squares problem with bounds on the variables.
nnls(A, b)
    Solve argmin_x ||Ax - b||_2 for x >= 0.
lsq_linear(A, b[, bounds, method, tol, ...])
    Solve a linear least-squares problem with bounds on the variables.
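For instance, a minimal sketch fitting a line with least_squares (the data and residual function are made up for illustration):

>>> import numpy as np
>>> from scipy.optimize import least_squares
>>> x = np.array([0., 1., 2., 3.])
>>> y = np.array([1., 3., 5., 7.])          # exactly y = 2*x + 1
>>> def residuals(p):
...     return p[0] * x + p[1] - y          # residuals of a linear model
>>> res = least_squares(residuals, x0=[0., 0.])
>>> np.round(res.x, 6)                      # recovers slope 2 and intercept 1
array([ 2.,  1.])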
Global Optimization
basinhopping(func, x0[, niter, T, stepsize, ...])
    Find the global minimum of a function using the basin-hopping algorithm.
brute(func, ranges[, args, Ns, full_output, ...])
    Minimize a function over a given range by brute force.
differential_evolution(func, bounds[, args, ...])
    Finds the global minimum of a multivariate function.
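For example, a sketch locating the Rosenbrock minimum with differential_evolution (the bounds and seed are arbitrary):

>>> from scipy.optimize import differential_evolution, rosen
>>> bounds = [(0, 2), (0, 2)]               # search box containing the minimum
>>> result = differential_evolution(rosen, bounds, seed=1)
>>> result.x                                # converges to the minimum at (1, 1)
array([ 1.,  1.])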
Rosenbrock function
rosen(x) |
The Rosenbrock function. |
rosen_der(x) |
The derivative (i.e. |
rosen_hess(x) |
The Hessian matrix of the Rosenbrock function. |
rosen_hess_prod(x, p) |
Product of the Hessian matrix of the Rosenbrock function with a vector. |
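These helpers are convenient test problems. For example, check_grad (from the Utilities section below) can confirm that rosen_der agrees with a finite-difference gradient of rosen at an arbitrary point:

>>> from scipy.optimize import rosen, rosen_der, check_grad
>>> err = check_grad(rosen, rosen_der, [1.1, 0.9])  # norm of the difference
>>> err < 1e-4                              # analytic and numeric gradients agree
True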
Fitting
curve_fit(f, xdata, ydata[, p0, sigma, ...])
    Use non-linear least squares to fit a function, f, to data.
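A minimal sketch, using made-up noiseless data from an exponential decay:

>>> import numpy as np
>>> from scipy.optimize import curve_fit
>>> def model(x, a, b):
...     return a * np.exp(-b * x)           # model with parameters a and b
>>> xdata = np.linspace(0, 4, 50)
>>> ydata = model(xdata, 2.5, 1.3)          # synthetic data, true (a, b) = (2.5, 1.3)
>>> popt, pcov = curve_fit(model, xdata, ydata, p0=(1.0, 1.0))
>>> np.round(popt, 6)                       # recovers the true parameters
array([ 2.5,  1.3])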
Root finding
Scalar functions
brentq(f, a, b[, args, xtol, rtol, maxiter, ...])
    Find a root of a function in a given interval.
brenth(f, a, b[, args, xtol, rtol, maxiter, ...])
    Find a root of f in [a, b].
ridder(f, a, b[, args, xtol, rtol, maxiter, ...])
    Find a root of a function in an interval.
bisect(f, a, b[, args, xtol, rtol, maxiter, ...])
    Find a root of a function within an interval.
newton(func, x0[, fprime, args, tol, ...])
    Find a zero using the Newton-Raphson or secant method.
Fixed point finding:
fixed_point(func, x0[, args, xtol, maxiter, ...])
    Find a fixed point of the function.
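For example, bracketing the positive root of x**2 - 2 with brentq, and recovering the same value as a fixed point of x = (x + 2/x)/2 (both illustrations are arbitrary):

>>> from scipy.optimize import brentq, fixed_point
>>> brentq(lambda x: x**2 - 2, 0, 2)        # root of f on the bracket [0, 2]
1.4142135623730951
>>> float(fixed_point(lambda x: (x + 2./x) / 2., 1.0))  # same solution, sqrt(2)
1.4142135623730951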
Multidimensional
General nonlinear solvers:
root(fun, x0[, args, method, jac, tol, ...])
    Find a root of a vector function.
fsolve(func, x0[, args, fprime, ...])
    Find the roots of a function.
broyden1(F, xin[, iter, alpha, ...])
    Find a root of a function, using Broyden’s first Jacobian approximation.
broyden2(F, xin[, iter, alpha, ...])
    Find a root of a function, using Broyden’s second Jacobian approximation.
The root function supports the following methods, selected via the method keyword: 'hybr', 'lm', 'broyden1', 'broyden2', 'anderson', 'linearmixing', 'diagbroyden', 'excitingmixing', 'krylov', and 'df-sane'.
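For instance, the following sketch solves a small nonlinear system with the default 'hybr' method (the system itself is an arbitrary example):

>>> from scipy.optimize import root
>>> def fun(x):
...     return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
...             0.5 * (x[1] - x[0])**3 + x[1]]
>>> sol = root(fun, [0., 0.], method='hybr')
>>> sol.x
array([ 0.8411639,  0.1588361])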
Large-scale nonlinear solvers:
newton_krylov(F, xin[, iter, rdiff, method, ...])
    Find a root of a function, using Krylov approximation for inverse Jacobian.
anderson(F, xin[, iter, alpha, w0, M, ...])
    Find a root of a function, using (extended) Anderson mixing.
Simple iterations:
excitingmixing(F, xin[, iter, alpha, ...])
    Find a root of a function, using a tuned diagonal Jacobian approximation.
linearmixing(F, xin[, iter, alpha, verbose, ...])
    Find a root of a function, using a scalar Jacobian approximation.
diagbroyden(F, xin[, iter, alpha, verbose, ...])
    Find a root of a function, using diagonal Broyden Jacobian approximation.
Additional information on the nonlinear solvers
Linear Programming
Simplex Algorithm:
linprog(c[, A_ub, b_ub, A_eq, b_eq, bounds, ...])
    Minimize a linear objective function subject to linear equality and inequality constraints.
linprog_verbose_callback(xk, **kwargs)
    A sample callback function demonstrating the linprog callback interface.
The linprog function supports the following method: 'simplex'.
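For example, minimizing -x0 + 4*x1 subject to two inequality constraints and a lower bound on x1:

>>> from scipy.optimize import linprog
>>> c = [-1, 4]                             # objective: minimize -x0 + 4*x1
>>> A_ub = [[-3, 1], [1, 2]]                # inequality constraints A_ub x <= b_ub
>>> b_ub = [6, 4]
>>> res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=((None, None), (-3, None)))
>>> res.x
array([ 10.,  -3.])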
Assignment problems:
linear_sum_assignment(cost_matrix)
    Solve the linear sum assignment problem.
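A short sketch of the assignment solver on a made-up 3x3 cost matrix:

>>> import numpy as np
>>> from scipy.optimize import linear_sum_assignment
>>> cost = np.array([[4, 1, 3],
...                  [2, 0, 5],
...                  [3, 2, 2]])
>>> row_ind, col_ind = linear_sum_assignment(cost)
>>> col_ind, cost[row_ind, col_ind].sum()   # optimal assignment and its total cost
(array([1, 0, 2]), 5)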
Utilities
approx_fprime(xk, f, epsilon, *args)
    Finite-difference approximation of the gradient of a scalar function.
bracket(func[, xa, xb, args, grow_limit, ...])
    Bracket the minimum of the function.
check_grad(func, grad, x0, *args, **kwargs)
    Check the correctness of a gradient function by comparing it against a (forward) finite-difference approximation of the gradient.
line_search(f, myfprime, xk, pk[, gfk, ...])
    Find alpha that satisfies strong Wolfe conditions.
show_options([solver, method, disp])
    Show documentation for additional options of optimization solvers.
LbfgsInvHessProduct(sk, yk)
    Linear operator for the L-BFGS approximate inverse Hessian.
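For example, approx_fprime and check_grad used together on an arbitrary quadratic:

>>> import numpy as np
>>> from scipy.optimize import approx_fprime, check_grad
>>> def func(x):
...     return x[0]**2 + x[1]**2
>>> def grad(x):
...     return np.array([2. * x[0], 2. * x[1]])
>>> eps = np.sqrt(np.finfo(float).eps)      # a common step size choice
>>> g = approx_fprime(np.array([1., 1.]), func, eps)
>>> np.allclose(g, [2., 2.], atol=1e-6)     # close to the analytic gradient
True
>>> check_grad(func, grad, np.array([1.5, -1.5])) < 1e-6
True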