scipy.optimize.root
scipy.optimize.root(fun, x0, args=(), method='hybr', jac=None, tol=None, callback=None, options=None)
Find a root of a vector function.
Parameters
fun : callable
A vector function to find a root of.
x0 : ndarray
Initial guess.
args : tuple, optional
Extra arguments passed to the objective function and its Jacobian.
method : str, optional
Type of solver. Should be one of
‘hybr’
‘lm’
‘broyden1’
‘broyden2’
‘anderson’
‘linearmixing’
‘diagbroyden’
‘excitingmixing’
‘krylov’
‘df-sane’
jac : bool or callable, optional
If jac is a Boolean and is True, fun is assumed to return the value of the Jacobian along with the objective function; if False, the Jacobian will be estimated numerically. jac can also be a callable returning the Jacobian of fun. In this case, it must accept the same arguments as fun. (A sketch of the jac=True convention follows this parameter list.)
tol : float, optional
Tolerance for termination. For detailed control, use solver-specific options.
callback : function, optional
Optional callback function. It is called on every iteration as callback(x, f), where x is the current solution and f the corresponding residual. Available for all methods but ‘hybr’ and ‘lm’; see the last example below.
options : dict, optional
A dictionary of solver options, e.g. xtol or maxiter. See show_options() for details.
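As a sketch of the jac=True calling convention, the system from the Examples section below can be written as a single function that returns the residual vector together with its Jacobian (the name fun_and_jac is just an illustration):

>>> import numpy as np
>>> from scipy import optimize
>>> def fun_and_jac(x):
...     # residual vector f(x) and its Jacobian, returned from a single call
...     f = [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
...          0.5 * (x[1] - x[0])**3 + x[1]]
...     J = np.array([[1 + 1.5 * (x[0] - x[1])**2,
...                    -1.5 * (x[0] - x[1])**2],
...                   [-1.5 * (x[1] - x[0])**2,
...                    1 + 1.5 * (x[1] - x[0])**2]])
...     return f, J
>>> sol = optimize.root(fun_and_jac, [0, 0], jac=True, method='hybr')
>>> sol.x
array([ 0.8411639, 0.1588361])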
Returns
sol : OptimizeResult
The solution represented as an OptimizeResult object. Important attributes are: x, the solution array; success, a Boolean flag indicating whether the algorithm exited successfully; and message, which describes the cause of the termination. See OptimizeResult for a description of other attributes.
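For instance, these attributes can be inspected directly on the returned object; the sketch below uses a trivially solvable equation chosen only for illustration, and the exact message text depends on the solver:

>>> from scipy import optimize
>>> sol = optimize.root(lambda x: x + 2.0, 0.0)  # root of x + 2 = 0
>>> sol.x
array([-2.])
>>> sol.success
True
>>> sol.message  # wording varies between solvers
'The solution converged.'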
See also
show_options
Additional options accepted by the solvers
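For example, the solver-specific options for a given method can be printed with show_options and then passed through the options dict. This is a sketch: the option names accepted depend on the chosen method, and xtol is one of the options recognized by the default ‘hybr’ solver.

>>> from scipy import optimize
>>> optimize.show_options(solver='root', method='hybr')  # prints the available options
>>> sol = optimize.root(lambda x: x**3 - 1.0, 0.5, method='hybr',
...                     options={'xtol': 1e-12})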
Notes
This section describes the available solvers that can be selected by the ‘method’ parameter. The default method is hybr.
Method hybr uses a modification of the Powell hybrid method as implemented in MINPACK [1].
Method lm solves the system of nonlinear equations in a least squares sense using a modification of the Levenberg-Marquardt algorithm as implemented in MINPACK [1].
Method df-sane is a derivative-free spectral method [3].
Methods broyden1, broyden2, anderson, linearmixing, diagbroyden, excitingmixing and krylov are inexact Newton methods, with backtracking or full line searches [2]. Each method corresponds to a particular Jacobian approximation. See nonlin for details.
Method broyden1 uses Broyden’s first Jacobian approximation; it is known as Broyden’s good method.
Method broyden2 uses Broyden’s second Jacobian approximation; it is known as Broyden’s bad method.
Method anderson uses (extended) Anderson mixing.
Method krylov uses Krylov approximation for the inverse Jacobian. It is suitable for large-scale problems (a sketch appears at the end of these notes).
Method diagbroyden uses diagonal Broyden Jacobian approximation.
Method linearmixing uses a scalar Jacobian approximation.
Method excitingmixing uses a tuned diagonal Jacobian approximation.
Warning
The algorithms implemented for methods diagbroyden, linearmixing and excitingmixing may be useful for specific problems, but whether they will work may depend strongly on the problem.
New in version 0.11.0.
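As an illustration of choosing one of these solvers for a larger system, the sketch below applies the ‘krylov’ method to a mildly nonlinear, diagonally dominant system of 1000 equations; the particular system and the helper name large_fun are only assumptions made for demonstration:

>>> import numpy as np
>>> from scipy import optimize
>>> def large_fun(x):
...     # 1000 coupled equations: 3*x_i - cos(x_i) - 1 - 0.5*(x_{i-1} + x_{i+1}) = 0
...     f = 3.0 * x - np.cos(x) - 1.0
...     f[1:] -= 0.5 * x[:-1]
...     f[:-1] -= 0.5 * x[1:]
...     return f
>>> sol = optimize.root(large_fun, np.zeros(1000), method='krylov')
>>> sol.success
True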
References
[1] More, Jorge J., Burton S. Garbow, and Kenneth E. Hillstrom. 1980. User Guide for MINPACK-1.
[2] C. T. Kelley. 1995. Iterative Methods for Linear and Nonlinear Equations. Society for Industrial and Applied Mathematics. <https://archive.siam.org/books/kelley/fr16/>
[3] La Cruz, J. M. Martinez, M. Raydan. Math. Comp. 75, 1429 (2006).
Examples
The following functions define a system of nonlinear equations and its Jacobian.
>>> import numpy as np
>>> def fun(x):
...     return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
...             0.5 * (x[1] - x[0])**3 + x[1]]
>>> def jac(x):
...     return np.array([[1 + 1.5 * (x[0] - x[1])**2,
...                       -1.5 * (x[0] - x[1])**2],
...                      [-1.5 * (x[1] - x[0])**2,
...                       1 + 1.5 * (x[1] - x[0])**2]])
A solution can be obtained as follows.
>>> from scipy import optimize
>>> sol = optimize.root(fun, [0, 0], jac=jac, method='hybr')
>>> sol.x
array([ 0.8411639, 0.1588361])
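The callback parameter can be combined with any method other than ‘hybr’ and ‘lm’. The sketch below records the residual norm at each iteration while solving the same system; the callback name log_progress and the choice of ‘broyden1’ are just illustrative:

>>> residual_norms = []
>>> def log_progress(x, f):
...     # x is the current iterate, f the corresponding residual vector
...     residual_norms.append(np.linalg.norm(f))
>>> sol = optimize.root(fun, [0, 0], method='broyden1', callback=log_progress)
>>> len(residual_norms) > 0
True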