scipy.sparse.linalg.cgs

scipy.sparse.linalg.cgs(A, b, x0=None, tol=1e-05, maxiter=None, M=None, callback=None, atol=None)
Use Conjugate Gradient Squared iteration to solve Ax = b.

Parameters:
- A : {sparse matrix, ndarray, LinearOperator}
  The real-valued N-by-N matrix of the linear system. Alternatively, A can be a linear operator which can produce Ax using, e.g., scipy.sparse.linalg.LinearOperator. A sketch of building such an operator follows this parameter list.
- b : ndarray
  Right hand side of the linear system. Has shape (N,) or (N, 1).
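As noted for A, the matrix need not be stored explicitly. The following is a minimal sketch, not part of the original docstring, that wraps a plain matrix-vector product in scipy.sparse.linalg.LinearOperator and passes it to cgs; the matrix and right-hand side are the same made-up values used in the Examples section below.

>>> import numpy as np
>>> from scipy.sparse.linalg import LinearOperator, cgs
>>> R = np.array([[4., 2., 0., 1.],
...               [3., 0., 0., 2.],
...               [0., 1., 1., 1.],
...               [0., 2., 1., 0.]])
>>> def matvec(v):
...     return R @ v               # any routine that returns A @ v works
>>> A_op = LinearOperator(R.shape, matvec=matvec, dtype=float)
>>> b = np.array([-1, -0.5, -1, 2])
>>> x, exit_code = cgs(A_op, b)
>>> print(exit_code)               # 0 indicates successful convergence
0
>>> np.allclose(R @ x, b)
True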
Returns:
- x : ndarray
  The converged solution.
- info : integer
  Provides convergence information:

    0  : successful exit
    >0 : convergence to tolerance not achieved, number of iterations
    <0 : illegal input or breakdown

  A short sketch of branching on this code follows this list.
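A short sketch, not part of the original docstring, of one way to act on the returned info code; the 2-by-2 system here is made up purely for illustration and no particular outcome is assumed.

>>> import numpy as np
>>> from scipy.sparse.linalg import cgs
>>> A = np.array([[4., 1.],
...               [2., 3.]])
>>> b = np.array([1., 2.])
>>> x, info = cgs(A, b, maxiter=50)
>>> if info == 0:
...     msg = "converged"
... elif info > 0:
...     msg = f"tolerance not reached after {info} iterations"
... else:
...     msg = "illegal input or breakdown"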
Other Parameters:
- x0 : ndarray
  Starting guess for the solution.
- tol, atol : float, optional
  Tolerances for convergence, norm(residual) <= max(tol*norm(b), atol). The default for atol is 'legacy', which emulates a different legacy behavior.

  Warning: The default value for atol will be changed in a future release. For future compatibility, specify atol explicitly.
- maxiter : integer
  Maximum number of iterations. Iteration will stop after maxiter steps even if the specified tolerance has not been achieved.
- M : {sparse matrix, ndarray, LinearOperator}
  Preconditioner for A. The preconditioner should approximate the inverse of A. Effective preconditioning dramatically improves the rate of convergence, which implies that fewer iterations are needed to reach a given error tolerance. A sketch of supplying M, together with an explicit atol and a callback, follows this list.
- callback : function
  User-supplied function to call after each iteration. It is called as callback(xk), where xk is the current solution vector.
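The following is a hedged sketch, not part of the original docstring, assuming the tol/atol signature documented above: it supplies a simple Jacobi (inverse-diagonal) preconditioner through M, passes atol explicitly as the warning recommends, and records iterates with a callback. The diagonally dominant test matrix is made up for illustration; any object applying an approximation of the inverse of A can serve as M, the Jacobi choice being just the cheapest nontrivial option.

>>> import numpy as np
>>> from scipy.sparse import csc_matrix, diags
>>> from scipy.sparse.linalg import cgs
>>> A = csc_matrix([[4., 1., 0., 0.],
...                 [1., 5., 1., 0.],
...                 [0., 1., 6., 1.],
...                 [0., 0., 1., 7.]])
>>> b = np.array([1., 2., 3., 4.])
>>> M = diags(1.0 / A.diagonal())   # Jacobi preconditioner: inverse of diag(A)
>>> iterates = []                   # one entry per iteration
>>> def record(xk):                 # xk is the current solution vector
...     iterates.append(xk.copy())
>>> x, info = cgs(A, b, M=M, tol=1e-8, atol=1e-12, callback=record)
>>> print(info)                     # 0 indicates successful convergence
0
>>> np.allclose(A @ x, b)
True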
Examples
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import cgs
>>> R = np.array([[4, 2, 0, 1],
...               [3, 0, 0, 2],
...               [0, 1, 1, 1],
...               [0, 2, 1, 0]])
>>> A = csc_matrix(R)
>>> b = np.array([-1, -0.5, -1, 2])
>>> x, exit_code = cgs(A, b)
>>> print(exit_code)  # 0 indicates successful convergence
0
>>> np.allclose(A.dot(x), b)
True