scipy.optimize.lsq_linear

scipy.optimize.lsq_linear(A, b, bounds=(-inf, inf), method='trf', tol=1e-10, lsq_solver=None, lsmr_tol=None, max_iter=None, verbose=0)

Solve a linear least-squares problem with bounds on the variables.
Given an m-by-n design matrix A and a target vector b with m elements, lsq_linear solves the following optimization problem:

minimize 0.5 * ||A x - b||**2
subject to lb <= x <= ub
This optimization problem is convex, hence a found minimum (provided the iterations have converged) is guaranteed to be global.
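A minimal usage sketch (the matrix, target vector, and bounds below are made up for illustration, not taken from this page):

```python
import numpy as np
from scipy.optimize import lsq_linear

# Small dense problem: 3 equations, 2 unknowns.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

# A scalar bound applies to every variable: constrain both to [0, 1].
res = lsq_linear(A, b, bounds=(0.0, 1.0))
print(res.x)     # solution respecting the bounds
print(res.cost)  # 0.5 * ||A x - b||**2 at the solution
```

Because the problem is convex, the reported minimum is global whenever the iterations have converged.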
Parameters:
A : array_like, sparse matrix or LinearOperator, shape (m, n)
    Design matrix. Can be scipy.sparse.linalg.LinearOperator.
b : array_like, shape (m,)
    Target vector.
bounds : 2-tuple of array_like, optional
    Lower and upper bounds on independent variables. Defaults to no bounds. Each array must have shape (n,) or be a scalar; in the latter case the bound is the same for all variables. Use np.inf with an appropriate sign to disable bounds on all or some variables.
method : ‘trf’ or ‘bvls’, optional
    Method to perform minimization.
    ‘trf’ : Trust Region Reflective algorithm adapted for a linear least-squares problem. This is an interior-point-like method and the required number of iterations is weakly correlated with the number of variables.
    ‘bvls’ : Bounded-Variable Least-Squares algorithm. This is an active set method, which requires a number of iterations comparable to the number of variables. Can’t be used when A is sparse or LinearOperator.
    Default is ‘trf’.
tol : float, optional
    Tolerance parameter. The algorithm terminates if the relative change of the cost function is less than tol on the last iteration. Additionally, the first-order optimality measure is considered: method='trf' terminates if the uniform norm of the gradient, scaled to account for the presence of the bounds, is less than tol; method='bvls' terminates if Karush-Kuhn-Tucker conditions are satisfied within tol tolerance.
lsq_solver : {None, ‘exact’, ‘lsmr’}, optional
    Method of solving unbounded least-squares problems throughout iterations:
    ‘exact’ : Use dense QR or SVD decomposition approach. Can’t be used when A is sparse or LinearOperator.
    ‘lsmr’ : Use scipy.sparse.linalg.lsmr iterative procedure which requires only matrix-vector product evaluations. Can’t be used with method='bvls'.
    If None (default), the solver is chosen based on the type of A.
lsmr_tol : None, float or ‘auto’, optional
    Tolerance parameters ‘atol’ and ‘btol’ for scipy.sparse.linalg.lsmr. If None (default), it is set to 1e-2 * tol. If ‘auto’, the tolerance will be adjusted based on the optimality of the current iterate, which can speed up the optimization process, but is not always reliable.
max_iter : None or int, optional
    Maximum number of iterations before termination. If None (default), it is set to 100 for method='trf' or to the number of variables for method='bvls' (not counting iterations for ‘bvls’ initialization).
verbose : {0, 1, 2}, optional
Level of algorithm’s verbosity:
 0 : work silently (default).
 1 : display a termination report.
 2 : display progress during iterations.
Returns:
OptimizeResult with the following fields defined:
 x : ndarray, shape (n,)
Solution found.
 cost : float
Value of the cost function at the solution.
 fun : ndarray, shape (m,)
Vector of residuals at the solution.
 optimality : float
First-order optimality measure. The exact meaning depends on method; refer to the description of the tol parameter.
 active_mask : ndarray of int, shape (n,)
Each component shows whether a corresponding constraint is active (that is, whether a variable is at the bound):
 0 : a constraint is not active.
 -1 : a lower bound is active.
 1 : an upper bound is active.
Might be somewhat arbitrary for the trf method as it generates a sequence of strictly feasible iterates and active_mask is determined within a tolerance threshold.
 nit : int
Number of iterations. Zero if the unconstrained solution is optimal.
 status : int
Reason for algorithm termination:
 -1 : the algorithm was not able to make progress on the last iteration.
 0 : the maximum number of iterations is exceeded.
 1 : the firstorder optimality measure is less than tol.
 2 : the relative change of the cost function is less than tol.
 3 : the unconstrained solution is optimal.
 message : str
Verbal description of the termination reason.
 success : bool
True if one of the convergence criteria is satisfied (status > 0).
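The returned fields can be inspected directly on the result object. A small sketch (the identity design matrix and bounds are made up for illustration) in which both variables end up pinned to a bound:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Identity design matrix: the unconstrained solution is simply b = (2, -2).
A = np.eye(2)
b = np.array([2.0, -2.0])

# Bounds [0, 1] exclude the unconstrained solution, so the optimizer
# pushes x[0] to its upper bound and x[1] to its lower bound.
res = lsq_linear(A, b, bounds=(0.0, 1.0))
print(res.x)            # approximately [1., 0.]
print(res.active_mask)  # which bounds are active (may be approximate for 'trf')
print(res.status, res.message)
```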
See also
nnls
 Linear least squares with nonnegativity constraint.
least_squares
 Nonlinear least squares with bounds on the variables.
Notes
The algorithm first computes the unconstrained least-squares solution by numpy.linalg.lstsq or scipy.sparse.linalg.lsmr depending on lsq_solver. This solution is returned as optimal if it lies within the bounds.

Method ‘trf’ runs the adaptation of the algorithm described in [STIR] for a linear least-squares problem. The iterations are essentially the same as in the nonlinear least-squares algorithm, but as the quadratic function model is always accurate, we don’t need to track or modify the radius of a trust region. The line search (backtracking) is used as a safety net when a selected step does not decrease the cost function. Read more detailed description of the algorithm in scipy.optimize.least_squares.

Method ‘bvls’ runs a Python implementation of the algorithm described in [BVLS]. The algorithm maintains active and free sets of variables, on each iteration chooses a new variable to move from the active set to the free set and then solves the unconstrained least-squares problem on free variables. This algorithm is guaranteed to give an accurate solution eventually, but may require up to n iterations for a problem with n variables. Additionally, an ad hoc initialization procedure is implemented that determines which variables to set free or active initially. It takes some number of iterations before actual BVLS starts, but can significantly reduce the number of further iterations.
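Since both methods solve the same convex problem, they should agree on dense inputs. A quick sketch comparing them (the random data and bounds are illustrative only):

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.RandomState(0)
A = rng.randn(50, 10)
b = rng.randn(50)

# 'bvls' needs a dense A; 'trf' additionally accepts sparse matrices
# and LinearOperator instances.
res_trf = lsq_linear(A, b, bounds=(-0.1, 0.1), method='trf')
res_bvls = lsq_linear(A, b, bounds=(-0.1, 0.1), method='bvls')

# The problem is convex, so both methods reach the same minimum cost.
print(res_trf.cost, res_bvls.cost)
```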
References

[STIR] M. A. Branch, T. F. Coleman, and Y. Li, “A Subspace, Interior, and Conjugate Gradient Method for Large-Scale Bound-Constrained Minimization Problems,” SIAM Journal on Scientific Computing, Vol. 21, Number 1, pp. 1-23, 1999.

[BVLS] P. B. Stark and R. L. Parker, “Bounded-Variable Least-Squares: an Algorithm and Applications”, Computational Statistics, 10, 129-141, 1995.

Examples
In this example a problem with a large sparse matrix and bounds on the variables is solved.
>>> import numpy as np
>>> from scipy.sparse import rand
>>> from scipy.optimize import lsq_linear
>>> np.random.seed(0)
>>> m = 20000
>>> n = 10000
>>> A = rand(m, n, density=1e-4)
>>> b = np.random.randn(m)
>>> lb = np.random.randn(n)
>>> ub = lb + 1
>>> res = lsq_linear(A, b, bounds=(lb, ub), lsmr_tol='auto', verbose=1)
# may vary
The relative change of the cost function is less than `tol`.
Number of iterations 16, initial cost 1.5039e+04, final cost 1.1112e+04, first-order optimality 4.66e-08.