# numpy.linalg.lstsq

numpy.linalg.lstsq(a, b, rcond=-1)

Return the least-squares solution to a linear matrix equation.

Solves the equation a x = b by computing a vector x that minimizes the Euclidean 2-norm || b - a x ||.
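
As a minimal illustration (with made-up data; the arrays below are not from this page), the residues returned for an overdetermined system agree with the squared norm computed by hand:

```
>>> import numpy as np
>>> a = np.array([[1., 1.], [1., 2.], [1., 3.]])
>>> b = np.array([1., 2., 2.])
>>> x, residues, rank, s = np.linalg.lstsq(a, b)
>>> np.allclose(residues, np.sum((b - a.dot(x))**2))
True
```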

Parameters:

    a : array_like, shape (M, N)
        Input equation coefficients.
    b : array_like, shape (M,) or (M, K)
        Equation target values. If b is two-dimensional, the least-squares solution is calculated for each of the K target sets.
    rcond : float, optional
        Cutoff for small singular values of a. Singular values smaller than rcond times the largest singular value are considered zero.

Returns:

    x : ndarray, shape (N,) or (N, K)
        Least-squares solution. The shape of x depends on the shape of b.
    residues : ndarray, shape (), (1,), or (K,)
        Sums of squared residues: the squared Euclidean norm for each column in b - a x. If the rank of a is < N or M <= N, this is an empty array. If b is 1-dimensional, this is a (1,)-shaped array. Otherwise the shape is (K,).
    rank : int
        Rank of matrix a.
    s : ndarray, shape (min(M, N),)
        Singular values of a.

Raises:

    LinAlgError
        If computation does not converge.
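
The effect of rcond can be seen on a nearly rank-deficient matrix. This is a sketch with hypothetical values, chosen so that one singular value is tiny:

```
>>> a = np.array([[1., 1.], [2., 2.], [3., 3. + 1e-10]])
>>> b = np.array([1., 2., 3.])
>>> x, res, rank, s = np.linalg.lstsq(a, b)          # default cutoff keeps both singular values
>>> rank
2
>>> x, res, rank, s = np.linalg.lstsq(a, b, rcond=1e-5)  # tiny singular value treated as zero
>>> rank                                                 # rank < N, so res comes back empty
1
```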

Notes

If b is a matrix, then all array results are returned as matrices.
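
For a plain 2-D ndarray b, each column is treated as an independent target set. A small sketch with invented numbers, showing the resulting shapes:

```
>>> a = np.array([[0., 1.], [1., 1.], [2., 1.]])
>>> b = np.array([[1., 0.], [2., 1.], [3., 2.]])  # two target columns
>>> x, res, rank, s = np.linalg.lstsq(a, b)
>>> x.shape    # one (N,) solution per column of b
(2, 2)
>>> res.shape  # one sum of squared residues per column
(2,)
```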

Examples

Fit a line, y = mx + c, through some noisy data-points:

```
>>> import numpy as np
>>> x = np.array([0, 1, 2, 3])
>>> y = np.array([-1, 0.2, 0.9, 2.1])
```

By examining the coefficients, we see that the line should have a gradient of roughly 1 and cut the y-axis at more or less -1.

We can rewrite the line equation as y = Ap, where A = [[x 1]] and p = [[m], [c]]. Now use lstsq to solve for p:

```
>>> A = np.vstack([x, np.ones(len(x))]).T
>>> A
array([[ 0.,  1.],
       [ 1.,  1.],
       [ 2.,  1.],
       [ 3.,  1.]])
```
```
>>> m, c = np.linalg.lstsq(A, y)[0]
>>> print(m, c)
1.0 -0.95
```
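
As a quick cross-check (not part of the original example), np.polyfit with degree 1 performs the same least-squares fit and should recover the same coefficients:

```
>>> np.allclose(np.polyfit(x, y, 1), [m, c])
True
```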

Plot the data along with the fitted line:

```
>>> import matplotlib.pyplot as plt
>>> plt.plot(x, y, 'o', label='Original data', markersize=10)
>>> plt.plot(x, m*x + c, 'r', label='Fitted line')
>>> plt.legend()
>>> plt.show()
```
