scipy.sparse.linalg.lgmres#
- scipy.sparse.linalg.lgmres(A, b, x0=None, tol=1e-05, maxiter=1000, M=None, callback=None, inner_m=30, outer_k=3, outer_v=None, store_outer_Av=True, prepend_outer_v=False, atol=None)[source]#
Solve a matrix equation using the LGMRES algorithm.
The LGMRES algorithm [1] [2] is designed to avoid some problems in the convergence of restarted GMRES, and often converges in fewer iterations.
- Parameters
- A : {sparse matrix, ndarray, LinearOperator}
The real or complex N-by-N matrix of the linear system. Alternatively, A can be a linear operator which can produce Ax using, e.g., scipy.sparse.linalg.LinearOperator.
- b : ndarray
Right hand side of the linear system. Has shape (N,) or (N,1).
- x0 : ndarray
Starting guess for the solution.
- tol, atol : float, optional
Tolerances for convergence, norm(residual) <= max(tol*norm(b), atol). The default for atol is tol.
Warning: The default value for atol will be changed in a future release. For future compatibility, specify atol explicitly.
- maxiter : int, optional
Maximum number of iterations. Iteration will stop after maxiter steps even if the specified tolerance has not been achieved.
- M : {sparse matrix, ndarray, LinearOperator}, optional
Preconditioner for A. The preconditioner should approximate the inverse of A. Effective preconditioning dramatically improves the rate of convergence, which implies that fewer iterations are needed to reach a given error tolerance (see the sketch after this parameter list).
- callback : function, optional
User-supplied function to call after each iteration. It is called as callback(xk), where xk is the current solution vector (a usage sketch follows the Returns section below).
- inner_m : int, optional
Number of inner GMRES iterations per outer iteration.
- outer_k : int, optional
Number of vectors to carry between inner GMRES iterations. According to [1], good values are in the range of 1…3. However, note that if you want to use the additional vectors to accelerate solving multiple similar problems, larger values may be beneficial.
- outer_v : list of tuples, optional
List containing tuples (v, Av) of vectors and corresponding matrix-vector products, used to augment the Krylov subspace, and carried between inner GMRES iterations. The element Av can be None if the matrix-vector product should be re-evaluated. This parameter is modified in-place by lgmres, and can be used to pass ‘guess’ vectors in and out of the algorithm when solving similar problems.
- store_outer_Av : bool, optional
Whether LGMRES should also store A@v, in addition to the vectors v, in the outer_v list. Default is True.
- prepend_outer_v : bool, optional
Whether to put outer_v augmentation vectors before Krylov iterates. In standard LGMRES, prepend_outer_v=False.
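The following is a minimal sketch of how a preconditioner might be supplied through M, using an incomplete-LU factorization from scipy.sparse.linalg.spilu wrapped in a LinearOperator; the 3-by-3 matrix and the tolerance values are arbitrary choices for illustration, and atol is passed explicitly as the warning above recommends.
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import LinearOperator, lgmres, spilu
>>> A = csc_matrix([[3, 2, 0], [1, -1, 0], [0, 5, 1]], dtype=float)
>>> b = np.array([2, 4, -1], dtype=float)
>>> ilu = spilu(A)                          # incomplete LU factorization of A
>>> M = LinearOperator(A.shape, ilu.solve)  # applying M approximately solves with A
>>> x, info = lgmres(A, b, M=M, tol=1e-8, atol=0.0)
>>> info
0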
- Returns
- x : ndarray
The converged solution.
- info : int
Provides convergence information:
0 : successful exit
>0 : convergence to tolerance not achieved, number of iterations
<0 : illegal input or breakdown
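As a small usage sketch combining the callback parameter with the info return code (the matrix, right-hand side, and the monitor function below are illustrative assumptions, not part of the lgmres API):
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import lgmres
>>> A = csc_matrix([[3, 2, 0], [1, -1, 0], [0, 5, 1]], dtype=float)
>>> b = np.array([2, 4, -1], dtype=float)
>>> residual_norms = []
>>> def monitor(xk):
...     # xk is the current solution iterate handed to the callback
...     residual_norms.append(np.linalg.norm(b - A @ xk))
>>> x, info = lgmres(A, b, callback=monitor)
>>> info == 0   # 0: converged; >0: iteration count without convergence; <0: breakdown
True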
Notes
The LGMRES algorithm [1] [2] is designed to avoid the slowing of convergence in restarted GMRES that is caused by alternating residual vectors. It typically outperforms GMRES(m) of comparable memory requirements by some measure, or at least is not much worse.
Another advantage of this algorithm is that you can supply it with ‘guess’ vectors in the outer_v argument that augment the Krylov subspace. If the solution lies close to the span of these vectors, the algorithm converges faster. This can be useful if several very similar matrices need to be inverted one after another, such as in Newton-Krylov iteration, where the Jacobian matrix often changes little between nonlinear steps.
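A hedged sketch of this reuse pattern (the matrices and the perturbation below are made up for illustration): pass an initially empty list as outer_v to the first solve, then hand the populated list to a solve with a nearby matrix. Setting store_outer_Av=False keeps only the vectors v, so no stale A@v products are carried over when the matrix changes.
>>> import numpy as np
>>> from scipy.sparse import csc_matrix, identity
>>> from scipy.sparse.linalg import lgmres
>>> A = csc_matrix([[3, 2, 0], [1, -1, 0], [0, 5, 1]], dtype=float)
>>> b = np.array([2, 4, -1], dtype=float)
>>> outer_v = []                      # filled in-place with (v, None) tuples
>>> x1, info1 = lgmres(A, b, outer_v=outer_v, store_outer_Av=False)
>>> A2 = A + 1e-3 * identity(3)       # a nearby matrix, e.g. a Newton-Krylov step
>>> x2, info2 = lgmres(A2, b, outer_v=outer_v, store_outer_Av=False)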
References
- [1] A.H. Baker, E.R. Jessup, and T. Manteuffel, “A Technique for Accelerating the Convergence of Restarted GMRES”, SIAM J. Matrix Anal. Appl. 26, 962 (2005).
- [2] A.H. Baker, “On Improving the Performance of the Linear Solver restarted GMRES”, PhD thesis, University of Colorado (2003).
Examples
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import lgmres
>>> A = csc_matrix([[3, 2, 0], [1, -1, 0], [0, 5, 1]], dtype=float)
>>> b = np.array([2, 4, -1], dtype=float)
>>> x, exitCode = lgmres(A, b)
>>> print(exitCode)            # 0 indicates successful convergence
0
>>> np.allclose(A.dot(x), b)
True
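A possible follow-up, not part of the original example: reuse the solution above as the starting guess x0 when the right-hand side changes only slightly (the perturbation here is arbitrary).
>>> b2 = b + 1e-4                       # a nearby right-hand side
>>> x2, exitCode = lgmres(A, b2, x0=x)  # warm start from the previous solution
>>> np.allclose(A.dot(x2), b2)
True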