- scipy.optimize.root(fun, x0, args=(), method='hybr', jac=None, tol=None, callback=None, options=None)
Find a root of a vector function.
New in version 0.11.0.
fun : callable
A vector function to find a root of.
x0 : ndarray
Initial guess.
args : tuple, optional
Extra arguments passed to the objective function and its Jacobian.
method : str, optional
Type of solver. Should be one of 'hybr', 'lm', 'broyden1', 'broyden2', 'anderson', 'linearmixing', 'diagbroyden', 'excitingmixing', or 'krylov'.
jac : bool or callable, optional
If jac is a Boolean and is True, fun is assumed to return the value of the Jacobian along with the objective function. If False, the Jacobian will be estimated numerically. jac can also be a callable returning the Jacobian of fun. In this case, it must accept the same arguments as fun.
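For example, when jac=True the objective function must return both the residual vector and its Jacobian matrix. The following is a minimal sketch on a made-up two-variable system (the function name and the system itself are illustrative, not part of this reference):

```python
import numpy as np
from scipy import optimize

def fun_and_jac(x):
    # Residuals f(x) = [x0**2 - 1, x1 - 2] ...
    f = [x[0]**2 - 1.0, x[1] - 2.0]
    # ... and their Jacobian, returned together because jac=True.
    J = np.array([[2.0 * x[0], 0.0],
                  [0.0,        1.0]])
    return f, J

sol = optimize.root(fun_and_jac, [2.0, 0.0], jac=True, method='hybr')
print(sol.x)  # approximately [1., 2.]
```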
tol : float, optional
Tolerance for termination. For detailed control, use solver-specific options.
callback : function, optional
Optional callback function. It is called on every iteration as callback(x, f), where x is the current solution and f the corresponding residual. Supported by all methods except 'hybr' and 'lm'.
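A callback can be used to record the iterates of one of the supported methods. A minimal sketch, using method='broyden1' and an illustrative system (the bookkeeping list and function names are assumptions for the example):

```python
import numpy as np
from scipy import optimize

def fun(x):
    return [x[0]**2 - 4.0, x[1] - 1.0]

history = []

def cb(x, f):
    # Record a copy of the iterate and its residual at each iteration.
    history.append((np.copy(x), np.copy(f)))

sol = optimize.root(fun, [3.0, 0.0], method='broyden1', callback=cb)
print(len(history) > 0)  # True: the callback was invoked
```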
options : dict, optional
A dictionary of solver options. E.g. xtol or maxiter, see show_options() for details.
sol : OptimizeResult
The solution represented as an OptimizeResult object. Important attributes are: x, the solution array; success, a Boolean flag indicating whether the algorithm exited successfully; and message, which describes the cause of the termination. See OptimizeResult for a description of other attributes.
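The returned object can be inspected for convergence before the solution is used; a minimal sketch on a trivially solvable (illustrative) equation:

```python
from scipy import optimize

def f(x):
    # Linear equation x0 - 1 = 0.
    return [x[0] - 1.0]

sol = optimize.root(f, [0.0])
# Check the termination status before trusting sol.x.
print(sol.success)  # True
print(sol.x)        # approximately [1.]
```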
- Additional options accepted by the solvers
This section describes the available solvers that can be selected by the ‘method’ parameter. The default method is hybr.
Method hybr uses a modification of the Powell hybrid method as implemented in MINPACK [R110].
Method lm solves the system of nonlinear equations in a least squares sense using a modification of the Levenberg-Marquardt algorithm as implemented in MINPACK [R110].
Methods broyden1, broyden2, anderson, linearmixing, diagbroyden, excitingmixing, krylov are inexact Newton methods, with backtracking or full line searches [R111]. Each method corresponds to a particular Jacobian approximation. See nonlin for details.
- Method broyden1 uses Broyden’s first Jacobian approximation; it is known as Broyden’s good method.
- Method broyden2 uses Broyden’s second Jacobian approximation; it is known as Broyden’s bad method.
- Method anderson uses (extended) Anderson mixing.
- Method krylov uses a Krylov approximation for the inverse Jacobian. It is suitable for large-scale problems.
- Method diagbroyden uses diagonal Broyden Jacobian approximation.
- Method linearmixing uses a scalar Jacobian approximation.
- Method excitingmixing uses a tuned diagonal Jacobian approximation.
The algorithms implemented for methods diagbroyden, linearmixing and excitingmixing may be useful for specific problems, but whether they will work may depend strongly on the problem.
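As an illustration, one of these inexact Newton methods can solve a small nonlinear system without a user-supplied Jacobian, letting the solver build its own approximation (a sketch; the system is illustrative and the default tolerances are used):

```python
import numpy as np
from scipy import optimize

def fun(x):
    # Small smooth system; no jac argument is given, so broyden1
    # maintains its own approximation to the Jacobian.
    return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
            0.5 * (x[1] - x[0])**3 + x[1]]

sol = optimize.root(fun, [0.0, 0.0], method='broyden1')
print(sol.x)
```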
[R110] (1, 2, 3) More, Jorge J., Burton S. Garbow, and Kenneth E. Hillstrom. 1980. User Guide for MINPACK-1.
[R111] (1, 2) C. T. Kelley. 1995. Iterative Methods for Linear and Nonlinear Equations. Society for Industrial and Applied Mathematics. <http://www.siam.org/books/kelley/>
The following functions define a system of nonlinear equations and its Jacobian.
>>> def fun(x):
...     return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
...             0.5 * (x[1] - x[0])**3 + x[1]]
>>> def jac(x):
...     return np.array([[1 + 1.5 * (x[0] - x[1])**2,
...                       -1.5 * (x[0] - x[1])**2],
...                      [-1.5 * (x[1] - x[0])**2,
...                       1 + 1.5 * (x[1] - x[0])**2]])
A solution can be obtained as follows.
>>> import numpy as np
>>> from scipy import optimize
>>> sol = optimize.root(fun, [0, 0], jac=jac, method='hybr')
>>> sol.x
array([ 0.8411639,  0.1588361])