Optimization and root finding (scipy.optimize)

Optimization

General-purpose

fmin(func, x0[, args, xtol, ftol, maxiter, ...]) Minimize a function using the downhill simplex algorithm.
fmin_powell(func, x0[, args, xtol, ftol, ...]) Minimize a function using modified Powell’s method. This method only uses function values, not derivatives.
fmin_cg(f, x0[, fprime, args, gtol, norm, ...]) Minimize a function using a nonlinear conjugate gradient algorithm.
fmin_bfgs(f, x0[, fprime, args, gtol, norm, ...]) Minimize a function using the BFGS algorithm.
fmin_ncg(f, x0, fprime[, fhess_p, fhess, ...]) Unconstrained minimization of a function using the Newton-CG method.
leastsq(func, x0[, args, Dfun, full_output, ...]) Minimize the sum of squares of a set of equations.
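
A minimal usage sketch (the starting point and tolerance are illustrative): minimizing the Rosenbrock function, which SciPy ships as scipy.optimize.rosen, with the simplex-based fmin.

>>> from scipy.optimize import fmin, rosen
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]            # illustrative starting guess
>>> xopt = fmin(rosen, x0, xtol=1e-8, disp=False)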

Constrained (multivariate)

fmin_l_bfgs_b(func, x0[, fprime, args, ...]) Minimize a function func using the L-BFGS-B algorithm.
fmin_tnc(func, x0[, fprime, args, ...]) Minimize a function with variables subject to bounds, using gradient information in a truncated Newton algorithm.
fmin_cobyla(func, x0, cons[, args, ...]) Minimize a function using the Constrained Optimization BY Linear Approximation (COBYLA) method.
fmin_slsqp(func, x0[, eqcons, f_eqcons, ...]) Minimize a function using Sequential Least SQuares Programming (SLSQP).
nnls(A, b) Solve argmin_x || Ax - b ||_2 for x >= 0. This is a wrapper for a FORTRAN non-negative least squares solver.
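
A minimal sketch of bound-constrained minimization with fmin_l_bfgs_b; the quadratic objective and bounds are made up for illustration, and approx_grad=True asks the routine to estimate the gradient by finite differences.

>>> import numpy as np
>>> from scipy.optimize import fmin_l_bfgs_b
>>> def f(x):                                  # hypothetical objective
...     return (x[0] - 1.0)**2 + (x[1] - 2.5)**2
>>> x, fval, info = fmin_l_bfgs_b(f, np.array([2.0, 0.0]),
...                               approx_grad=True,
...                               bounds=[(0, None), (0, None)])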

Global

anneal(func, x0[, args, schedule, ...]) Minimize a function using simulated annealing.
brute(func, ranges[, args, Ns, full_output, ...]) Minimize a function over a given range by brute force.
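
A minimal sketch of a grid search with brute; the objective and ranges are illustrative, and finish=None skips the default local polishing step.

>>> from scipy.optimize import brute
>>> def f(z):                                  # hypothetical objective, minimum near (0.3, -0.4)
...     return (z[0] - 0.3)**2 + (z[1] + 0.4)**2
>>> xmin = brute(f, ((-1, 1), (-1, 1)), Ns=25, finish=None)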

Scalar function minimizers

fminbound(func, x1, x2[, args, xtol, ...]) Bounded minimization for scalar functions.
brent(func[, args, brack, tol, full_output, ...]) Given a function of one-variable and a possible bracketing interval, return the minimum of the function isolated to a fractional precision of tol.
golden(func[, args, brack, tol, full_output]) Given a function of one-variable and a possible bracketing interval, return the minimum of the function isolated to a fractional precision of tol.
bracket(func[, xa, xb, args, grow_limit, ...]) Given a function and distinct initial points, search in the downhill direction (as defined by the initial points) and return new points xa, xb, xc that bracket the minimum of the function: f(xa) > f(xb) < f(xc).
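
A minimal sketch of a bounded scalar minimization with fminbound; the quadratic and the interval are illustrative.

>>> from scipy.optimize import fminbound
>>> xmin = fminbound(lambda x: (x - 2.0)**2, 0.0, 4.0)   # minimum at x = 2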

Fitting

curve_fit(f, xdata, ydata[, p0, sigma]) Use non-linear least squares to fit a function, f, to data.
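
A minimal sketch of curve_fit; the exponential model and the synthetic noisy data are made up for illustration. popt holds the fitted parameters and pcov their estimated covariance.

>>> import numpy as np
>>> from scipy.optimize import curve_fit
>>> def model(x, a, b):                        # hypothetical model function
...     return a * np.exp(-b * x)
>>> xdata = np.linspace(0, 4, 50)
>>> ydata = model(xdata, 2.5, 1.3) + 0.05 * np.random.randn(xdata.size)
>>> popt, pcov = curve_fit(model, xdata, ydata, p0=(1.0, 1.0))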

Root finding

Scalar functions

brentq(f, a, b[, args, xtol, rtol, maxiter, ...]) Find a root of a function in a given interval.
brenth(f, a, b[, args, xtol, rtol, maxiter, ...]) Find a root of f in [a, b].
ridder(f, a, b[, args, xtol, rtol, maxiter, ...]) Find a root of a function in an interval.
bisect(f, a, b[, args, xtol, rtol, maxiter, ...]) Find a root of f in [a, b].
newton(func, x0[, fprime, args, tol, maxiter]) Find a zero using the Newton-Raphson or secant method.
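
A minimal sketch of bracketed root finding with brentq; the function and the bracket are illustrative. The interval [a, b] must contain a sign change.

>>> from scipy.optimize import brentq
>>> root = brentq(lambda x: x**2 - 2.0, 0.0, 2.0)   # converges to sqrt(2)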

Fixed point finding:

fixed_point(func, x0[, args, xtol, maxiter]) Find the point where func(x) == x.
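
A minimal sketch of fixed_point, here solving x == cos(x); the starting guess is illustrative.

>>> import numpy as np
>>> from scipy.optimize import fixed_point
>>> x_star = fixed_point(np.cos, 0.5)          # x* is approximately 0.739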

Multidimensional

General nonlinear solvers:

fsolve(func, x0[, args, fprime, ...]) Find the roots of a function.
broyden1(F, xin[, iter, alpha, ...]) Find a root of a function, using Broyden’s first Jacobian approximation.
broyden2(F, xin[, iter, alpha, ...]) Find a root of a function, using Broyden’s second Jacobian approximation.
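
A minimal sketch of fsolve on a made-up two-equation system F(x) == 0; broyden1 and broyden2 take the same kind of residual function and starting guess.

>>> import numpy as np
>>> from scipy.optimize import fsolve
>>> def F(x):                                  # hypothetical system of two equations
...     return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
...             0.5 * (x[1] - x[0])**3 + x[1]]
>>> sol = fsolve(F, [0.0, 0.0])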

Large-scale nonlinear solvers:

newton_krylov(F, xin[, iter, rdiff, method, ...]) Find a root of a function, using Krylov approximation for inverse Jacobian.
anderson(F, xin[, iter, alpha, w0, M, ...]) Find a root of a function, using (extended) Anderson mixing.
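
A minimal sketch of newton_krylov on a made-up, large but decoupled residual; in practice these solvers target problems where forming the full Jacobian is impractical.

>>> import numpy as np
>>> from scipy.optimize import newton_krylov
>>> def residual(x):                           # hypothetical residual: cos(x_i) - x_i = 0
...     return np.cos(x) - x
>>> sol = newton_krylov(residual, np.zeros(1000), method='lgmres')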

Simple iterations:

excitingmixing(F, xin[, iter, alpha, ...]) Find a root of a function, using a tuned diagonal Jacobian approximation.
linearmixing(F, xin[, iter, alpha, verbose, ...]) Find a root of a function, using a scalar Jacobian approximation.
diagbroyden(F, xin[, iter, alpha, verbose, ...]) Find a root of a function, using diagonal Broyden Jacobian approximation.
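
A minimal sketch of diagbroyden on the same kind of made-up decoupled residual; the diagonal Jacobian approximation suits problems whose components interact weakly.

>>> import numpy as np
>>> from scipy.optimize import diagbroyden
>>> def F(x):                                  # hypothetical decoupled residual
...     return np.cos(x) - x
>>> sol = diagbroyden(F, np.zeros(10))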

See also: additional information on the nonlinear solvers.

Utility Functions

line_search(f, myfprime, xk, pk[, gfk, ...]) Find alpha that satisfies strong Wolfe conditions.
check_grad(func, grad, x0, *args) Check the correctness of a gradient function by comparing it against a finite-difference approximation of the gradient.
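
A minimal sketch of check_grad; the objective and its analytic gradient are made up for illustration, and the returned value should be close to zero when the gradient is correct.

>>> import numpy as np
>>> from scipy.optimize import check_grad
>>> def func(x):                               # hypothetical objective
...     return x[0]**2 + x[1]**2
>>> def grad(x):                               # its analytic gradient
...     return np.array([2.0 * x[0], 2.0 * x[1]])
>>> err = check_grad(func, grad, np.array([1.5, -1.5]))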