Return the least-squares solution to a linear matrix equation.
Solves the equation a x = b by computing a vector x that minimizes the norm || b - a x ||. The equation may be under-, well-, or over-determined (i.e., the number of linearly independent rows of a can be less than, equal to, or greater than its number of linearly independent columns). If a is square and of full rank, then x (but for round-off error) is the "exact" solution of the equation.
Parameters:  a : array_like, shape (M, N)
    Coefficient matrix.
b : array_like, shape (M,) or (M, K)
    Ordinate or "dependent variable" values. If b is two-dimensional, the least-squares solution is calculated for each of the K columns of b.
rcond : float, optional
    Cut-off ratio for small singular values of a. Singular values smaller than rcond times the largest singular value are treated as zero.


Returns:  x : ndarray, shape (N,) or (N, K)
    Least-squares solution. If b is two-dimensional, the solutions are in the K columns of x.
residues : ndarray, shape (), (1,), or (K,)
    Sums of squared residuals: squared Euclidean 2-norm for each column in b - a x. If the rank of a is less than N or M <= N, this is an empty array.
rank : int
    Rank of matrix a.
s : ndarray, shape (min(M, N),)
    Singular values of a.

Raises:  LinAlgError :
    If computation does not converge.

Notes
If b is a matrix, then all array results are returned as matrices.
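For an ordinary 2-D ndarray right-hand side (as opposed to a matrix), each column of b is solved independently and the solutions are returned as the columns of x. A minimal sketch (using `rcond=None`, the modern default cutoff, and a hypothetical exactly-consistent system):

```python
import numpy as np

# 3 equations, 2 unknowns, two right-hand sides stacked as columns of b.
a = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
# Columns of b equal a @ [[1, 2], [3, 4]], so the least-squares fit is exact.
b = np.array([[1., 2.],
              [3., 4.],
              [4., 6.]])

x, residues, rank, s = np.linalg.lstsq(a, b, rcond=None)
print(x.shape)  # (2, 2): one solution column per column of b
print(x)        # close to [[1., 2.], [3., 4.]]
```

Because the system is consistent and a has full column rank, the residues are essentially zero and rank equals N.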
Examples
Fit a line, y = mx + c, through some noisy datapoints:
>>> x = np.array([0, 1, 2, 3])
>>> y = np.array([-1, 0.2, 0.9, 2.1])
By examining the coefficients, we see that the line should have a gradient of roughly 1 and cut the y-axis at, more or less, -1.
We can rewrite the line equation as y = Ap, where A = [[x 1]] and p = [[m], [c]]. Now use lstsq to solve for p:
>>> A = np.vstack([x, np.ones(len(x))]).T
>>> A
array([[ 0.,  1.],
       [ 1.,  1.],
       [ 2.,  1.],
       [ 3.,  1.]])
>>> m, c = np.linalg.lstsq(A, y)[0]
>>> print(m, c)
1.0 -0.95
Plot the data along with the fitted line:
>>> import matplotlib.pyplot as plt
>>> plt.plot(x, y, 'o', label='Original data', markersize=10)
>>> plt.plot(x, m*x + c, 'r', label='Fitted line')
>>> plt.legend()
>>> plt.show()
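The residues return value can be checked directly against the fitted line: it holds the sum of squared vertical distances between the data and the fit. A small sketch of the same example, again passing `rcond=None` for the modern default cutoff:

```python
import numpy as np

x = np.array([0, 1, 2, 3])
y = np.array([-1, 0.2, 0.9, 2.1])
A = np.vstack([x, np.ones(len(x))]).T

p, residues, rank, s = np.linalg.lstsq(A, y, rcond=None)
m, c = p

# residues (shape (1,) here, since M > N and A has full rank) equals the
# sum of squared residuals of the fitted line computed by hand.
manual = np.sum((y - (m * x + c)) ** 2)
print(residues[0], manual)  # both close to 0.05
```

This is a quick sanity check that the solver minimized exactly the quantity described at the top of this page, || b - a x ||.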