scipy.optimize.leastsq

scipy.optimize.leastsq(func, x0, args=(), Dfun=None, full_output=0, col_deriv=0, ftol=1.49012e-08, xtol=1.49012e-08, gtol=0.0, maxfev=0, epsfcn=None, factor=100, diag=None)

Minimize the sum of squares of a set of equations.

x = arg min(sum(func(y)**2,axis=0))
         y
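A minimal usage sketch (the exponential-decay model, data, and names such as residuals are illustrative for this sketch, not part of the documented API): the callable returns the vector of residuals whose sum of squares is minimized.

import numpy as np
from scipy.optimize import leastsq

# Illustrative data: noisy samples of a * exp(-b * x).
rng = np.random.default_rng(0)
xdata = np.linspace(0, 4, 50)
ydata = 2.5 * np.exp(-1.3 * xdata) + 0.05 * rng.normal(size=xdata.size)

def residuals(params, x, y):
    # Vector of M residuals whose sum of squares leastsq minimizes.
    a, b = params
    return y - a * np.exp(-b * x)

x0 = np.array([1.0, 1.0])                        # starting estimate for (a, b)
popt, ier = leastsq(residuals, x0, args=(xdata, ydata))
print(popt)                                      # fitted (a, b), close to (2.5, 1.3)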
Parameters:
func : callable

Should take at least one (possibly length-N vector) argument and return M floating-point numbers. It must not return NaNs or fitting might fail.

x0 : ndarray

The starting estimate for the minimization.

args : tuple, optional

Any extra arguments to func are placed in this tuple.

Dfun : callable, optional

A function or method to compute the Jacobian of func with derivatives across the rows. If this is None, the Jacobian will be estimated. (A sketch of supplying an analytic Jacobian appears after this parameter list.)

full_output : bool, optional

non-zero to return all optional outputs.

col_deriv : bool, optional

non-zero to specify that the Jacobian function computes derivatives down the columns (faster, because there is no transpose operation).

ftol : float, optional

Relative error desired in the sum of squares.

xtol : float, optional

Relative error desired in the approximate solution.

gtol : float, optional

Orthogonality desired between the function vector and the columns of the Jacobian.

maxfev : int, optional

The maximum number of calls to the function. If Dfun is provided then the default maxfev is 100*(N+1) where N is the number of elements in x0, otherwise the default maxfev is 200*(N+1).

epsfcn : float, optional

A variable used in determining a suitable step length for the forward-difference approximation of the Jacobian (for Dfun=None). Normally the actual step length will be sqrt(epsfcn)*x. If epsfcn is less than the machine precision, it is assumed that the relative errors are of the order of the machine precision.

factor : float, optional

A parameter determining the initial step bound (factor * || diag * x||). Should be in interval (0.1, 100).

diag : sequence, optional

N positive entries that serve as scale factors for the variables.
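
As referenced in the Dfun entry above, a sketch of supplying an analytic Jacobian, reusing the illustrative model from the earlier sketch. With the default col_deriv=0, the returned array has shape (M, N) with derivatives across the rows; the derivative expressions below are for this illustrative model only.

import numpy as np
from scipy.optimize import leastsq

xdata = np.linspace(0, 4, 50)
ydata = 2.5 * np.exp(-1.3 * xdata)               # illustrative, noise-free data

def residuals(params, x, y):
    a, b = params
    return y - a * np.exp(-b * x)

def jacobian(params, x, y):
    # Element [i, j] = d residuals_i / d params_j (col_deriv=0 layout).
    a, b = params
    d_a = -np.exp(-b * x)                        # d/da of (y - a*exp(-b*x))
    d_b = a * x * np.exp(-b * x)                 # d/db of (y - a*exp(-b*x))
    return np.column_stack((d_a, d_b))

popt, ier = leastsq(residuals, [1.0, 1.0], Dfun=jacobian, args=(xdata, ydata))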

Returns:
x : ndarray

The solution (or the result of the last iteration for an unsuccessful call).

cov_x : ndarray

Uses the fjac and ipvt optional outputs to construct an estimate of the Jacobian around the solution. None if a singular matrix is encountered (indicates very flat curvature in some direction). This matrix must be multiplied by the residual variance to get the covariance of the parameter estimates; see curve_fit.

infodict : dict

a dictionary of optional outputs with the keys:

nfev

The number of function calls

fvec

The function evaluated at the output

fjac

A permutation of the R matrix of a QR factorization of the final approximate Jacobian matrix, stored column wise. Together with ipvt, the covariance of the estimate can be approximated.

ipvt

An integer array of length N which defines a permutation matrix, p, such that fjac*p = q*r, where r is upper triangular with diagonal elements of nonincreasing magnitude. Column j of p is column ipvt(j) of the identity matrix.

qtf

The vector (transpose(q) * fvec).

mesg : str

A string message giving information about the cause of failure.

ier : int

An integer flag. If it is equal to 1, 2, 3 or 4, the solution was found. Otherwise, the solution was not found. In either case, the optional output variable ‘mesg’ gives more information (a sketch of unpacking these outputs follows this list).
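
A sketch of unpacking the full return tuple with full_output=True, reusing the illustrative residual function from the earlier sketches.

import numpy as np
from scipy.optimize import leastsq

xdata = np.linspace(0, 4, 50)
ydata = 2.5 * np.exp(-1.3 * xdata)

def residuals(params, x, y):
    a, b = params
    return y - a * np.exp(-b * x)

x, cov_x, infodict, mesg, ier = leastsq(
    residuals, [1.0, 1.0], args=(xdata, ydata), full_output=True
)

if ier in (1, 2, 3, 4):                          # any of these flags means success
    print("solution:", x)
    print("function evaluations:", infodict["nfev"])
    print("residuals at the solution:", infodict["fvec"])
else:
    print("leastsq failed:", mesg)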

Notes

“leastsq” is a wrapper around MINPACK’s lmdif and lmder algorithms.

cov_x is a Jacobian approximation to the Hessian of the least squares objective function. This approximation assumes that the objective function is based on the difference between some observed target data (ydata) and a (non-linear) function of the parameters f(xdata, params):

func(params) = ydata - f(xdata, params)

so that the objective function is

  min   sum((ydata - f(xdata, params))**2, axis=0)
params

The solution, x, is always a 1D array, regardless of the shape of x0, or whether x0 is a scalar.
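
A sketch of the scaling described above: multiplying cov_x by the residual variance gives the covariance of the parameter estimates (the step curve_fit performs internally). The model and data are the illustrative ones used in the earlier sketches.

import numpy as np
from scipy.optimize import leastsq

rng = np.random.default_rng(0)
xdata = np.linspace(0, 4, 50)
ydata = 2.5 * np.exp(-1.3 * xdata) + 0.05 * rng.normal(size=xdata.size)

def residuals(params, x, y):
    a, b = params
    return y - a * np.exp(-b * x)

popt, cov_x, infodict, mesg, ier = leastsq(
    residuals, [1.0, 1.0], args=(xdata, ydata), full_output=True
)

if cov_x is not None:
    # Residual variance: sum of squared residuals over the degrees of freedom.
    dof = ydata.size - popt.size
    s_sq = (infodict["fvec"] ** 2).sum() / dof
    pcov = cov_x * s_sq                          # covariance of the parameter estimates
    perr = np.sqrt(np.diag(pcov))                # one-sigma parameter uncertainties
    print(popt, perr)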
