Sparse linear algebra (scipy.sparse.linalg)

Warning

This documentation is work-in-progress and unorganized.

Sparse Linear Algebra

The submodules of sparse.linalg:
  1. eigen: sparse eigenvalue problem solvers
  2. isolve: iterative methods for solving linear systems
  3. dsolve: direct factorization methods for solving linear systems

Examples

exception scipy.sparse.linalg.ArpackError(info, infodict={0: 'Normal exit.', 1: 'Maximum number of iterations taken. All possible eigenvalues of OP have been found.', 2: 'No longer an informational error. Deprecated starting with release 2 of ARPACK.', 3: 'No shifts could be applied during a cycle of the Implicitly restarted Arnoldi iteration. One possibility is to increase the size of NCV relative to NEV.', -9999: 'Could not build an Arnoldi factorization. IPARAM(5) returns the size of the current Arnoldi factorization. The user is advised to check that enough workspace and array storage has been allocated.', -13: "NEV and WHICH = 'BE' are incompatible.", -12: 'IPARAM(1) must be equal to 0 or 1.', -1: 'N must be positive.', -10: 'IPARAM(7) must be 1, 2, 3, 4, 5.', -9: 'Starting vector is zero.', -8: 'Error return from trid. eigenvalue calculation; Informational error from LAPACK routine dsteqr.', -7: 'Length of private work array WORKL is not sufficient.', -6: "BMAT must be one of 'I' or 'G'.", -5: "WHICH must be one of 'LM', 'SM', 'LA', 'SA' or 'BE'.", -4: 'The maximum number of Arnoldi update iterations allowed must be greater than zero.', -3: 'NCV must be greater than NEV and less than or equal to N.', -2: 'NEV must be positive.', -11: "IPARAM(7) = 1 and BMAT = 'G' are incompatible."})

ARPACK error

exception scipy.sparse.linalg.ArpackNoConvergence(msg, eigenvalues, eigenvectors)

ARPACK iteration did not converge

Attributes

eigenvalues ndarray Partial result. Converged eigenvalues.
eigenvectors ndarray Partial result. Converged eigenvectors.
class scipy.sparse.linalg.LinearOperator(shape, matvec, rmatvec=None, matmat=None, dtype=None)

Common interface for performing matrix vector products

Many iterative methods (e.g. cg, gmres) do not need to know the individual entries of a matrix to solve a linear system A*x=b. Such solvers only require the computation of matrix vector products, A*v where v is a dense vector. This class serves as an abstract interface between iterative solvers and matrix-like objects.

Parameters :

shape : tuple

Matrix dimensions (M,N)

matvec : callable f(v)

Returns A * v.

Other Parameters:

rmatvec : callable f(v)

Returns A^H * v, where A^H is the conjugate transpose of A.

matmat : callable f(V)

Returns A * V, where V is a dense matrix with dimensions (N,K).

dtype : dtype

Data type of the matrix.

See also

aslinearoperator
Construct LinearOperators

Notes

The user-defined matvec() function must properly handle the case where v has shape (N,) as well as the (N,1) case. The shape of the return type is handled internally by LinearOperator.
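
For example, a user-defined matvec can simply flatten its argument so both input shapes are handled uniformly (a minimal sketch; flattening is one possible approach, not a requirement of the interface):

>>> import numpy as np
>>> def shape_safe_mv(v):
...     v = np.ravel(v)                  # accepts both (N,) and (N,1)
...     return np.array([2*v[0], 3*v[1]])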

Examples

>>> import numpy as np
>>> from scipy.sparse.linalg import LinearOperator
>>> def mv(v):
...     return np.array([2*v[0], 3*v[1]])
...
>>> A = LinearOperator((2,2), matvec=mv)
>>> A
<2x2 LinearOperator with unspecified dtype>
>>> A.matvec(np.ones(2))
array([ 2.,  3.])
>>> A * np.ones(2)
array([ 2.,  3.])

Methods

matmat
matvec
matmat(X)

Matrix-matrix multiplication

Performs the operation y = A*X where A is an MxN linear operator and X is a dense N-by-K matrix or ndarray.

Parameters :

X : {matrix, ndarray}

An array with shape (N,K).

Returns :

Y : {matrix, ndarray}

A matrix or ndarray with shape (M,K) depending on the type of the X argument.

Notes

This matmat wraps any user-specified matmat routine to ensure that y has the correct type.

matvec(x)

Matrix-vector multiplication

Performs the operation y=A*x where A is an MxN linear operator and x is a column vector or rank-1 array.

Parameters :

x : {matrix, ndarray}

An array with shape (N,) or (N,1).

Returns :

y : {matrix, ndarray}

A matrix or ndarray with shape (M,) or (M,1) depending on the type and shape of the x argument.

Notes

This matvec wraps the user-specified matvec routine to ensure that y has the correct shape and type.

scipy.sparse.linalg.Tester

alias of NoseTester

scipy.sparse.linalg.aslinearoperator(A)

Return A as a LinearOperator.

‘A’ may be any of the following types:
  • ndarray
  • matrix
  • sparse matrix (e.g. csr_matrix, lil_matrix, etc.)
  • LinearOperator
  • An object with .shape and .matvec attributes

See the LinearOperator documentation for additional information.

Examples

>>> import numpy as np
>>> from scipy.sparse.linalg import aslinearoperator
>>> M = np.matrix([[1,2,3],[4,5,6]], dtype='int32')
>>> aslinearoperator(M)
<2x3 LinearOperator with dtype=int32>
scipy.sparse.linalg.bicg(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None)

Use BIConjugate Gradient iteration to solve A x = b

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The N-by-N matrix of the linear system.

b : {array, matrix}

Right hand side of the linear system. Has shape (N,) or (N,1).

Returns :

x : {array, matrix}

The converged solution.

info : integer

Provides convergence information:

  • 0 : successful exit
  • >0 : convergence to tolerance not achieved, number of iterations
  • <0 : illegal input or breakdown

Other Parameters:

x0 : {array, matrix}

Starting guess for the solution.

tol : float

Tolerance to achieve. The algorithm terminates when either the relative or the absolute residual is below tol.

maxiter : integer

Maximum number of iterations. Iteration will stop after maxiter steps even if the specified tolerance has not been achieved.

M : {sparse matrix, dense matrix, LinearOperator}

Preconditioner for A. The preconditioner should approximate the inverse of A. Effective preconditioning dramatically improves the rate of convergence, which implies that fewer iterations are needed to reach a given error tolerance.

callback : function

User-supplied function to call after each iteration. It is called as callback(xk), where xk is the current solution vector.

xtype : {‘f’,’d’,’F’,’D’}

This parameter is deprecated – avoid using it.

The type of the result. If None, then it will be determined from A.dtype.char and b. If A does not have a typecode method then it will compute A.matvec(x0) to get a typecode. To save the extra computation when A does not have a typecode attribute, use xtype=0 for the same type as b, or use xtype='f', 'd', 'F', or 'D'. This parameter has been superseded by LinearOperator.
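
Examples

A minimal usage sketch (the matrix values are illustrative; bicg is typically applied to much larger, genuinely sparse systems):

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import bicg
>>> A = csc_matrix(np.array([[4., 1.], [1., 3.]]))
>>> b = np.array([1., 2.])
>>> x, info = bicg(A, b)
>>> info                       # 0 means successful exit
0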

scipy.sparse.linalg.bicgstab(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None)

Use BIConjugate Gradient STABilized iteration to solve A x = b

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The N-by-N matrix of the linear system.

b : {array, matrix}

Right hand side of the linear system. Has shape (N,) or (N,1).

Returns :

x : {array, matrix}

The converged solution.

info : integer

Provides convergence information:

  • 0 : successful exit
  • >0 : convergence to tolerance not achieved, number of iterations
  • <0 : illegal input or breakdown

Other Parameters:

x0 : {array, matrix}

Starting guess for the solution.

tol : float

Tolerance to achieve. The algorithm terminates when either the relative or the absolute residual is below tol.

maxiter : integer

Maximum number of iterations. Iteration will stop after maxiter steps even if the specified tolerance has not been achieved.

M : {sparse matrix, dense matrix, LinearOperator}

Preconditioner for A. The preconditioner should approximate the inverse of A. Effective preconditioning dramatically improves the rate of convergence, which implies that fewer iterations are needed to reach a given error tolerance.

callback : function

User-supplied function to call after each iteration. It is called as callback(xk), where xk is the current solution vector.

xtype : {‘f’,’d’,’F’,’D’}

This parameter is deprecated – avoid using it.

The type of the result. If None, then it will be determined from A.dtype.char and b. If A does not have a typecode method then it will compute A.matvec(x0) to get a typecode. To save the extra computation when A does not have a typecode attribute, use xtype=0 for the same type as b, or use xtype='f', 'd', 'F', or 'D'. This parameter has been superseded by LinearOperator.

scipy.sparse.linalg.cg(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None)

Use Conjugate Gradient iteration to solve A x = b

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The N-by-N matrix of the linear system.

b : {array, matrix}

Right hand side of the linear system. Has shape (N,) or (N,1).

Returns :

x : {array, matrix}

The converged solution.

info : integer

Provides convergence information:

  • 0 : successful exit
  • >0 : convergence to tolerance not achieved, number of iterations
  • <0 : illegal input or breakdown

Other Parameters:

x0 : {array, matrix}

Starting guess for the solution.

tol : float

Tolerance to achieve. The algorithm terminates when either the relative or the absolute residual is below tol.

maxiter : integer

Maximum number of iterations. Iteration will stop after maxiter steps even if the specified tolerance has not been achieved.

M : {sparse matrix, dense matrix, LinearOperator}

Preconditioner for A. The preconditioner should approximate the inverse of A. Effective preconditioning dramatically improves the rate of convergence, which implies that fewer iterations are needed to reach a given error tolerance.

callback : function

User-supplied function to call after each iteration. It is called as callback(xk), where xk is the current solution vector.

xtype : {‘f’,’d’,’F’,’D’}

This parameter is deprecated – avoid using it.

The type of the result. If None, then it will be determined from A.dtype.char and b. If A does not have a typecode method then it will compute A.matvec(x0) to get a typecode. To save the extra computation when A does not have a typecode attribute, use xtype=0 for the same type as b, or use xtype='f', 'd', 'F', or 'D'. This parameter has been superseded by LinearOperator.
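
Examples

A sketch of the callback parameter described above (the matrix and the residual bookkeeping are illustrative assumptions, not part of the API):

>>> import numpy as np
>>> from scipy.sparse import identity
>>> from scipy.sparse.linalg import cg
>>> A = 2.0 * identity(5, format='csr')        # a simple SPD system, 2*I
>>> b = np.ones(5)
>>> residuals = []
>>> cb = lambda xk: residuals.append(np.linalg.norm(b - A * xk))
>>> x, info = cg(A, b, callback=cb)            # one residual norm per iteration
>>> info
0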

scipy.sparse.linalg.cgs(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None)

Use Conjugate Gradient Squared iteration to solve A x = b

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The N-by-N matrix of the linear system.

b : {array, matrix}

Right hand side of the linear system. Has shape (N,) or (N,1).

Returns :

x : {array, matrix}

The converged solution.

info : integer

Provides convergence information:

  • 0 : successful exit
  • >0 : convergence to tolerance not achieved, number of iterations
  • <0 : illegal input or breakdown

Other Parameters:

x0 : {array, matrix}

Starting guess for the solution.

tol : float

Tolerance to achieve. The algorithm terminates when either the relative or the absolute residual is below tol.

maxiter : integer

Maximum number of iterations. Iteration will stop after maxiter steps even if the specified tolerance has not been achieved.

M : {sparse matrix, dense matrix, LinearOperator}

Preconditioner for A. The preconditioner should approximate the inverse of A. Effective preconditioning dramatically improves the rate of convergence, which implies that fewer iterations are needed to reach a given error tolerance.

callback : function

User-supplied function to call after each iteration. It is called as callback(xk), where xk is the current solution vector.

xtype : {‘f’,’d’,’F’,’D’}

This parameter is deprecated – avoid using it.

The type of the result. If None, then it will be determined from A.dtype.char and b. If A does not have a typecode method then it will compute A.matvec(x0) to get a typecode. To save the extra computation when A does not have a typecode attribute, use xtype=0 for the same type as b, or use xtype='f', 'd', 'F', or 'D'. This parameter has been superseded by LinearOperator.

class scipy.sparse.linalg.csc_matrix(arg1, shape=None, dtype=None, copy=False)

Compressed Sparse Column matrix

This can be instantiated in several ways:
csc_matrix(D)
with a dense matrix or rank-2 ndarray D
csc_matrix(S)
with another sparse matrix S (equivalent to S.tocsc())
csc_matrix((M, N), [dtype])
to construct an empty matrix with shape (M, N); dtype is optional, defaulting to dtype='d'.
csc_matrix((data, ij), [shape=(M, N)])
where data and ij satisfy the relationship a[ij[0, k], ij[1, k]] = data[k]
csc_matrix((data, indices, indptr), [shape=(M, N)])
is the standard CSC representation where the row indices for column i are stored in indices[indptr[i]:indptr[i+1]] and their corresponding values are stored in data[indptr[i]:indptr[i+1]]. If the shape parameter is not supplied, the matrix dimensions are inferred from the index arrays.

Notes

Advantages of the CSC format
  • efficient arithmetic operations CSC + CSC, CSC * CSC, etc.
  • efficient column slicing
  • fast matrix vector products (CSR, BSR may be faster)
Disadvantages of the CSC format
  • slow row slicing operations (consider CSR)
  • changes to the sparsity structure are expensive (consider LIL or DOK)

Examples

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> csc_matrix((3,4), dtype=np.int8).todense()
matrix([[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]], dtype=int8)
>>> row = np.array([0,2,2,0,1,2])
>>> col = np.array([0,0,1,2,2,2])
>>> data = np.array([1,2,3,4,5,6])
>>> csc_matrix((data, (row,col)), shape=(3,3)).todense()
matrix([[1, 0, 4],
        [0, 0, 5],
        [2, 3, 6]])
>>> indptr = np.array([0,2,3,6])
>>> indices = np.array([0,2,2,0,1,2])
>>> data = np.array([1,2,3,4,5,6])
>>> csc_matrix((data, indices, indptr), shape=(3,3)).todense()
matrix([[1, 0, 4],
        [0, 0, 5],
        [2, 3, 6]])
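
Column extraction, where CSC is efficient, can be illustrated on the matrix constructed above (a small sketch reusing data, indices and indptr):

>>> A = csc_matrix((data, indices, indptr), shape=(3,3))
>>> A.getcol(1).todense()      # extracting a column is cheap in CSC
matrix([[0],
        [0],
        [3]])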

Methods

asformat
asfptype
astype
check_format
conj
conjugate
copy
diagonal
dot
eliminate_zeros
getH
get_shape
getcol
getformat
getmaxprint
getnnz
getrow
mean
multiply
nonzero
prune
reshape
set_shape
setdiag
sort_indices
sorted_indices
sum
sum_duplicates
toarray
tobsr
tocoo
tocsc
tocsr
todense
todia
todok
tolil
transpose
class scipy.sparse.linalg.csr_matrix(arg1, shape=None, dtype=None, copy=False)

Compressed Sparse Row matrix

This can be instantiated in several ways:
csr_matrix(D)
with a dense matrix or rank-2 ndarray D
csr_matrix(S)
with another sparse matrix S (equivalent to S.tocsr())
csr_matrix((M, N), [dtype])
to construct an empty matrix with shape (M, N); dtype is optional, defaulting to dtype='d'.
csr_matrix((data, ij), [shape=(M, N)])
where data and ij satisfy the relationship a[ij[0, k], ij[1, k]] = data[k]
csr_matrix((data, indices, indptr), [shape=(M, N)])
is the standard CSR representation where the column indices for row i are stored in indices[indptr[i]:indptr[i+1]] and their corresponding values are stored in data[indptr[i]:indptr[i+1]]. If the shape parameter is not supplied, the matrix dimensions are inferred from the index arrays.

Notes

Advantages of the CSR format
  • efficient arithmetic operations CSR + CSR, CSR * CSR, etc.
  • efficient row slicing
  • fast matrix vector products
Disadvantages of the CSR format
  • slow column slicing operations (consider CSC)
  • changes to the sparsity structure are expensive (consider LIL or DOK)

Examples

>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> csr_matrix((3,4), dtype=np.int8).todense()
matrix([[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]], dtype=int8)
>>> row = np.array([0,0,1,2,2,2])
>>> col = np.array([0,2,2,0,1,2])
>>> data = np.array([1,2,3,4,5,6])
>>> csr_matrix((data, (row,col)), shape=(3,3)).todense()
matrix([[1, 0, 2],
        [0, 0, 3],
        [4, 5, 6]])
>>> indptr = np.array([0,2,3,6])
>>> indices = np.array([0,2,2,0,1,2])
>>> data = np.array([1,2,3,4,5,6])
>>> csr_matrix((data, indices, indptr), shape=(3,3)).todense()
matrix([[1, 0, 2],
        [0, 0, 3],
        [4, 5, 6]])
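
Row extraction, where CSR is efficient, can be illustrated on the matrix constructed above (a small sketch reusing data, indices and indptr):

>>> A = csr_matrix((data, indices, indptr), shape=(3,3))
>>> A.getrow(1).todense()      # extracting a row is cheap in CSR
matrix([[0, 0, 3]])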

Methods

asformat
asfptype
astype
check_format
conj
conjugate
copy
diagonal
dot
eliminate_zeros
getH
get_shape
getcol
getformat
getmaxprint
getnnz
getrow
mean
multiply
nonzero
prune
reshape
set_shape
setdiag
sort_indices
sorted_indices
sum
sum_duplicates
toarray
tobsr
tocoo
tocsc
tocsr
todense
todia
todok
tolil
transpose
scipy.sparse.linalg.eigs(A, k=6, M=None, sigma=None, which='LM', v0=None, ncv=None, maxiter=None, tol=0, return_eigenvectors=True)

Find k eigenvalues and eigenvectors of the square matrix A.

Solves A * x[i] = w[i] * x[i], the standard eigenvalue problem for w[i] eigenvalues with corresponding eigenvectors x[i].

Parameters :

A : matrix, array, or object with matvec(x) method

An N x N matrix, array, or an object with matvec(x) method to perform the matrix vector product A * x. The sparse matrix formats in scipy.sparse are appropriate for A.

k : integer

The number of eigenvalues and eigenvectors desired. k must be smaller than N. It is not possible to compute all eigenvectors of a matrix.

Returns :

w : array

Array of k eigenvalues.

v : array

An array of k eigenvectors. v[:, i] is the eigenvector corresponding to the eigenvalue w[i].

Other Parameters:

M : matrix or array

(Not implemented) A symmetric positive-definite matrix for the generalized eigenvalue problem A * x = w * M * x.

sigma : real or complex

(Not implemented) Find eigenvalues near sigma. Shift spectrum by sigma.

v0 : array

Starting vector for iteration.

ncv : integer

The number of Lanczos vectors generated. ncv must be greater than k; it is recommended that ncv > 2*k.

which : string

Which k eigenvectors and eigenvalues to find:

  • ‘LM’ : largest magnitude
  • ‘SM’ : smallest magnitude
  • ‘LR’ : largest real part
  • ‘SR’ : smallest real part
  • ‘LI’ : largest imaginary part
  • ‘SI’ : smallest imaginary part

maxiter : integer

Maximum number of Arnoldi update iterations allowed

tol : float

Relative accuracy for eigenvalues (stopping criterion). The default value of 0 implies machine precision.

return_eigenvectors : boolean

Return eigenvectors (True) in addition to eigenvalues

Raises :

ArpackNoConvergence :

When the requested convergence is not obtained.

The currently converged eigenvalues and eigenvectors can be found as eigenvalues and eigenvectors attributes of the exception object.

See also

eigsh
eigenvalues and eigenvectors for symmetric matrix A

Notes

This function is a wrapper to the ARPACK [R162] SNEUPD, DNEUPD, CNEUPD, and ZNEUPD functions, which use the Implicitly Restarted Arnoldi Method to find the eigenvalues and eigenvectors [R163].

References

[R162] ARPACK Software, http://www.caam.rice.edu/software/ARPACK/
[R163] R. B. Lehoucq, D. C. Sorensen, and C. Yang, ARPACK USERS GUIDE: Solution of Large Scale Eigenvalue Problems by Implicitly Restarted Arnoldi Methods. SIAM, Philadelphia, PA, 1998.

Examples

Find 6 eigenvectors of the identity matrix:

>>> import numpy as np
>>> from scipy.sparse.linalg import eigs
>>> id = np.identity(13)
>>> vals, vecs = eigs(id, k=6)
>>> vals
array([ 1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j])
>>> vecs.shape
(13, 6)
scipy.sparse.linalg.eigsh(A, k=6, M=None, sigma=None, which='LM', v0=None, ncv=None, maxiter=None, tol=0, return_eigenvectors=True)

Find k eigenvalues and eigenvectors of the real symmetric square matrix A.

Solves A * x[i] = w[i] * x[i], the standard eigenvalue problem for w[i] eigenvalues with corresponding eigenvectors x[i].

Parameters :

A : matrix or array with real entries or object with matvec(x) method

An N x N real symmetric matrix or array or an object with matvec(x) method to perform the matrix vector product A * x. The sparse matrix formats in scipy.sparse are appropriate for A.

k : integer

The number of eigenvalues and eigenvectors desired.

Returns :

w : array

Array of k eigenvalues

v : array

An array of k eigenvectors. v[:, i] is the eigenvector corresponding to the eigenvalue w[i].

Other Parameters:

M : matrix or array

(Not implemented) A symmetric positive-definite matrix for the generalized eigenvalue problem A * x = w * M * x

sigma : real

(Not implemented) Find eigenvalues near sigma. Shift spectrum by sigma.

v0 : array

Starting vector for iteration.

ncv : integer

The number of Lanczos vectors generated. ncv must be greater than k and smaller than n; it is recommended that ncv > 2*k.

which : string

Which k eigenvectors and eigenvalues to find:
  • ‘LA’ : Largest (algebraic) eigenvalues

  • ‘SA’ : Smallest (algebraic) eigenvalues

  • ‘LM’ : Largest (in magnitude) eigenvalues

  • ‘SM’ : Smallest (in magnitude) eigenvalues

  • ‘BE’ : Half (k/2) from each end of the spectrum

    When k is odd, return one more (k/2+1) from the high end

maxiter : integer

Maximum number of Arnoldi update iterations allowed

tol : float

Relative accuracy for eigenvalues (stopping criterion). The default value of 0 implies machine precision.

return_eigenvectors : boolean

Return eigenvectors (True) in addition to eigenvalues

Raises :

ArpackNoConvergence :

When the requested convergence is not obtained.

The currently converged eigenvalues and eigenvectors can be found as eigenvalues and eigenvectors attributes of the exception object.

See also

eigs
eigenvalues and eigenvectors for a general (nonsymmetric) matrix A

Notes

This function is a wrapper to the ARPACK [R164] SSEUPD and DSEUPD functions, which use the Implicitly Restarted Lanczos Method to find the eigenvalues and eigenvectors [R165].

References

[R164] ARPACK Software, http://www.caam.rice.edu/software/ARPACK/
[R165] R. B. Lehoucq, D. C. Sorensen, and C. Yang, ARPACK USERS GUIDE: Solution of Large Scale Eigenvalue Problems by Implicitly Restarted Arnoldi Methods. SIAM, Philadelphia, PA, 1998.

Examples

>>> import numpy as np
>>> from scipy.sparse.linalg import eigsh
>>> id = np.identity(13)
>>> vals, vecs = eigsh(id, k=6)
>>> vals
array([ 1.,  1.,  1.,  1.,  1.,  1.])
>>> vecs.shape
(13, 6)
scipy.sparse.linalg.factorized(A)

Return a function for solving a sparse linear system, with A pre-factorized.

Example:

solve = factorized(A)   # Makes LU decomposition.
x1 = solve(rhs1)        # Uses the LU factors.
x2 = solve(rhs2)        # Uses again the LU factors.
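
A fully runnable version of the sketch above (the matrix values are illustrative; the CSC format is assumed here since the factorization is done by the direct solver):

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import factorized
>>> A = csc_matrix(np.array([[3., 2.], [1., 4.]]))
>>> solve = factorized(A)            # LU factorization computed once
>>> x1 = solve(np.array([1., 2.]))   # each call reuses the factors
>>> x2 = solve(np.array([3., 4.]))
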
scipy.sparse.linalg.gmres(A, b, x0=None, tol=1e-05, restart=None, maxiter=None, xtype=None, M=None, callback=None, restrt=None)

Use Generalized Minimal RESidual iteration to solve A x = b.

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The N-by-N matrix of the linear system.

b : {array, matrix}

Right hand side of the linear system. Has shape (N,) or (N,1).

Returns :

x : {array, matrix}

The converged solution.

info : int

Provides convergence information:
  • 0 : successful exit
  • >0 : convergence to tolerance not achieved, number of iterations
  • <0 : illegal input or breakdown
Other Parameters:

x0 : {array, matrix}

Starting guess for the solution (a vector of zeros by default).

tol : float

Tolerance to achieve. The algorithm terminates when either the relative or the absolute residual is below tol.

restart : int, optional

Number of iterations between restarts. Larger values increase iteration cost, but may be necessary for convergence. Default is 20.

maxiter : int, optional

Maximum number of iterations. Iteration will stop after maxiter steps even if the specified tolerance has not been achieved.

M : {sparse matrix, dense matrix, LinearOperator}

Inverse of the preconditioner of A. M should approximate the inverse of A and be easy to solve for (see Notes). Effective preconditioning dramatically improves the rate of convergence, which implies that fewer iterations are needed to reach a given error tolerance. By default, no preconditioner is used.

callback : function

User-supplied function to call after each iteration. It is called as callback(rk), where rk is the current residual vector.

See also

LinearOperator

Notes

A preconditioner, P, is chosen such that P is close to A but easy to solve for. The preconditioner parameter required by this routine is M = P^-1. The inverse should preferably not be calculated explicitly. Rather, use the following template to produce M:

# Construct a linear operator that computes P^-1 * x.
import scipy.sparse.linalg as spla
M_x = lambda x: spla.spsolve(P, x)
M = spla.LinearOperator((n, n), M_x)
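
A complete sketch following this template, taking P to be the diagonal of A (a simple Jacobi preconditioner); the matrix values here are illustrative assumptions:

>>> import numpy as np
>>> import scipy.sparse as sp
>>> import scipy.sparse.linalg as spla
>>> n = 4
>>> A = sp.csr_matrix(4 * np.eye(n) + np.ones((n, n)))
>>> P = sp.spdiags(A.diagonal(), 0, n, n).tocsc()          # diagonal of A
>>> M = spla.LinearOperator((n, n), lambda x: spla.spsolve(P, x))
>>> x, info = spla.gmres(A, np.ones(n), M=M)
>>> info
0
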
scipy.sparse.linalg.lgmres(A, b, x0=None, tol=1e-05, maxiter=1000, M=None, callback=None, inner_m=30, outer_k=3, outer_v=None, store_outer_Av=True)

Solve a matrix equation using the LGMRES algorithm.

The LGMRES algorithm [BJM] [BPh] is designed to avoid some convergence problems of restarted GMRES, and often converges in fewer iterations.

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The N-by-N matrix of the linear system.

b : {array, matrix}

Right hand side of the linear system. Has shape (N,) or (N,1).

x0 : {array, matrix}

Starting guess for the solution.

tol : float

Tolerance to achieve. The algorithm terminates when either the relative or the absolute residual is below tol.

maxiter : integer

Maximum number of iterations. Iteration will stop after maxiter steps even if the specified tolerance has not been achieved.

M : {sparse matrix, dense matrix, LinearOperator}

Preconditioner for A. The preconditioner should approximate the inverse of A. Effective preconditioning dramatically improves the rate of convergence, which implies that fewer iterations are needed to reach a given error tolerance.

callback : function

User-supplied function to call after each iteration. It is called as callback(xk), where xk is the current solution vector.

Returns :

x : array or matrix

The converged solution.

info : integer

Provides convergence information:

  • 0 : successful exit
  • >0 : convergence to tolerance not achieved, number of iterations
  • <0 : illegal input or breakdown

Notes

The LGMRES algorithm [BJM] [BPh] is designed to avoid the slowing of convergence in restarted GMRES, due to alternating residual vectors. It typically outperforms GMRES(m) of comparable memory requirements, or at least is not much worse.

Another advantage of this algorithm is that you can supply it with ‘guess’ vectors in the outer_v argument that augment the Krylov subspace. If the solution lies close to the span of these vectors, the algorithm converges faster. This can be useful if several very similar matrices need to be inverted one after another, such as in Newton-Krylov iteration where the Jacobian matrix often changes little between nonlinear steps; a sketch follows.
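
A sketch of reusing outer_v across consecutive, nearly identical solves (all values are illustrative; outer_v is assumed to be a list of augmentation vectors that lgmres fills in place):

>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> from scipy.sparse.linalg import lgmres
>>> A = csr_matrix(np.diag(np.arange(1., 6.)))
>>> outer_v = []                        # filled by the first solve
>>> x1, info1 = lgmres(A, np.ones(5), outer_v=outer_v)
>>> x2, info2 = lgmres(A, 1.01 * np.ones(5), outer_v=outer_v)  # reuses them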

References

[BJM] A.H. Baker, E.R. Jessup, and T. Manteuffel, SIAM J. Matrix Anal. Appl. 26, 962 (2005).
[BPh] A.H. Baker, PhD thesis, University of Colorado (2003). http://amath.colorado.edu/activities/thesis/allisonb/Thesis.ps
scipy.sparse.linalg.lobpcg(A, X, B=None, M=None, Y=None, tol=None, maxiter=20, largest=True, verbosityLevel=0, retLambdaHistory=False, retResidualNormsHistory=False)

Solve symmetric partial eigenproblems with optional preconditioning

This function implements the Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG).

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The symmetric linear operator of the problem, usually a sparse matrix. Often called the “stiffness matrix”.

X : array_like

Initial approximation to the k eigenvectors. If A has shape (n,n) then X should have shape (n,k).

B : {dense matrix, sparse matrix, LinearOperator}, optional

The right hand side operator in a generalized eigenproblem; by default, B = identity. Often called the “mass matrix”.

M : {dense matrix, sparse matrix, LinearOperator}, optional

Preconditioner to A; by default M = identity. M should approximate the inverse of A.

Y : array_like, optional

n-by-sizeY matrix of constraints (sizeY < n). The iterations will be performed in the B-orthogonal complement of the column space of Y. Y must be full rank.

Returns :

w : array

Array of k eigenvalues

v : array

An array of k eigenvectors. V has the same shape as X.

Other Parameters:

tol : scalar, optional

Solver tolerance (stopping criterion); by default tol = n*sqrt(eps).

maxiter : integer, optional

Maximum number of iterations; by default maxiter = min(n, 20).

largest : boolean, optional

When True, solve for the largest eigenvalues, otherwise the smallest.

verbosityLevel : integer, optional

Controls solver output. Default: verbosityLevel = 0.

retLambdaHistory : boolean, optional

Whether to return the eigenvalue history.

retResidualNormsHistory : boolean, optional

Whether to return the history of residual norms.

Notes

If both retLambdaHistory and retResidualNormsHistory are True, the return tuple has the format (lambda, V, lambda history, residual norms history).
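
Examples

A minimal usage sketch (the diagonal test matrix and random initial block are illustrative assumptions):

>>> import numpy as np
>>> from scipy.sparse import spdiags
>>> from scipy.sparse.linalg import lobpcg
>>> n = 100
>>> A = spdiags(np.arange(1., n + 1.), 0, n, n)   # eigenvalues 1..100
>>> X = np.random.rand(n, 3)                      # initial guess, k = 3
>>> w, v = lobpcg(A, X, largest=True, maxiter=40)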

scipy.sparse.linalg.lsqr(A, b, damp=0.0, atol=1e-08, btol=1e-08, conlim=100000000.0, iter_lim=None, show=False, calc_var=False)

Find the least-squares solution to a large, sparse, linear system of equations.

The function solves Ax = b, or min ||b - Ax||^2, or min ||Ax - b||^2 + d^2 ||x||^2.

The matrix A may be square or rectangular (over-determined or under-determined), and may have any rank.

1. Unsymmetric equations --    solve  A*x = b

2. Linear least squares  --    solve  A*x = b
                               in the least-squares sense

3. Damped least squares  --    solve  (   A    )*x = ( b )
                                      ( damp*I )     ( 0 )
                               in the least-squares sense
Parameters :

A : {sparse matrix, ndarray, LinearOperator}

Representation of an mxn matrix. It is required that the linear operator can produce Ax and A^T x.

b : (m,) ndarray

Right-hand side vector b.

damp : float

Damping coefficient.

atol, btol : float

Stopping tolerances. If both are 1.0e-9 (say), the final residual norm should be accurate to about 9 digits. (The final x will usually have fewer correct digits, depending on cond(A) and the size of damp.)

conlim : float

Another stopping tolerance. lsqr terminates if an estimate of cond(A) exceeds conlim. For compatible systems Ax = b, conlim could be as large as 1.0e+12 (say). For least-squares problems, conlim should be less than 1.0e+8. Maximum precision can be obtained by setting atol = btol = conlim = zero, but the number of iterations may then be excessive.

iter_lim : int

Explicit limitation on number of iterations (for safety).

show : bool

Display an iteration log.

calc_var : bool

Whether to estimate diagonals of (A'A + damp^2*I)^{-1}.

Returns :

x : ndarray of float

The final solution.

istop : int

Gives the reason for termination. 1 means x is an approximate solution to Ax = b. 2 means x approximately solves the least-squares problem.

itn : int

Iteration number upon termination.

r1norm : float

norm(r), where r = b - Ax.

r2norm : float

sqrt( norm(r)^2  +  damp^2 * norm(x)^2 ). Equal to r1norm if damp == 0.

anorm : float

Estimate of Frobenius norm of Abar = [[A]; [damp*I]].

acond : float

Estimate of cond(Abar).

arnorm : float

Estimate of norm(A'*r - damp^2*x).

xnorm : float

norm(x)

var : ndarray of float

If calc_var is True, estimates all diagonals of (A'A)^{-1} (if damp == 0) or more generally (A'A + damp^2*I)^{-1}. This is well defined if A has full column rank or damp > 0. (Not sure what var means if rank(A) < n and damp = 0.)

Notes

LSQR uses an iterative method to approximate the solution. The number of iterations required to reach a certain accuracy depends strongly on the scaling of the problem. Poor scaling of the rows or columns of A should therefore be avoided where possible.

For example, in problem 1 the solution is unaltered by row-scaling. If a row of A is very small or large compared to the other rows of A, the corresponding row of ( A b ) should be scaled up or down.

In problems 1 and 2, the solution x is easily recovered following column-scaling. Unless better information is known, the nonzero columns of A should be scaled so that they all have the same Euclidean norm (e.g., 1.0).

In problem 3, there is no freedom to re-scale if damp is nonzero. However, the value of damp should be assigned only after attention has been paid to the scaling of A.

The parameter damp is intended to help regularize ill-conditioned systems, by preventing the true solution from being very large. Another aid to regularization is provided by the parameter acond, which may be used to terminate iterations before the computed solution becomes very large.

If some initial estimate x0 is known and if damp == 0, one could proceed as follows:

  1. Compute a residual vector r0 = b - A*x0.
  2. Use LSQR to solve the system A*dx = r0.
  3. Add the correction dx to obtain a final solution x = x0 + dx.

This requires that x0 be available before and after the call to LSQR. To judge the benefits, suppose LSQR takes k1 iterations to solve A*x = b and k2 iterations to solve A*dx = r0. If x0 is “good”, norm(r0) will be smaller than norm(b). If the same stopping tolerances atol and btol are used for each system, k1 and k2 will be similar, but the final solution x0 + dx should be more accurate. The only way to reduce the total work is to use a larger stopping tolerance for the second system. If some value btol is suitable for A*x = b, the larger value btol*norm(b)/norm(r0) should be suitable for A*dx = r0.
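
A sketch of this warm-start procedure (the matrix, right-hand side, and x0 are illustrative assumptions):

>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> from scipy.sparse.linalg import lsqr
>>> A = csr_matrix(np.array([[1., 0.], [1., 1.], [0., 1.]]))
>>> b = np.array([1., 2., 1.])
>>> x0 = np.array([0.9, 1.1])         # known initial estimate
>>> r0 = b - A * x0                   # 1. residual of the estimate
>>> dx = lsqr(A, r0)[0]               # 2. solve A*dx = r0
>>> x = x0 + dx                       # 3. corrected solution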

Preconditioning is another way to reduce the number of iterations. If it is possible to solve a related system M*x = b efficiently, where M approximates A in some helpful way (e.g. M - A has low rank or its elements are small relative to those of A), LSQR may converge more rapidly on the system A*M^(-1)*z = b, after which x can be recovered by solving M*x = z.

If A is symmetric, LSQR should not be used!

Alternatives are the symmetric conjugate-gradient method (cg) and/or SYMMLQ. SYMMLQ is an implementation of symmetric cg that applies to any symmetric A and will converge more rapidly than LSQR. If A is positive definite, there are other implementations of symmetric cg that require slightly less work per iteration than SYMMLQ (but will take the same number of iterations).

References

[R166] C. C. Paige and M. A. Saunders (1982a). “LSQR: An algorithm for sparse linear equations and sparse least squares”, ACM TOMS 8(1), 43-71.
[R167] C. C. Paige and M. A. Saunders (1982b). “Algorithm 583. LSQR: Sparse linear equations and least squares problems”, ACM TOMS 8(2), 195-209.
[R168] M. A. Saunders (1995). “Solution of sparse rectangular systems using LSQR and CRAIG”, BIT 35, 588-604.
scipy.sparse.linalg.minres(A, b, x0=None, shift=0.0, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, show=False, check=False)

Use MINimum RESidual iteration to solve Ax=b

MINRES minimizes norm(A*x - b) for the symmetric matrix A. Unlike the Conjugate Gradient method, A can be indefinite or singular.

If shift != 0 then the method solves (A - shift*I)x = b

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The N-by-N matrix of the linear system.

b : {array, matrix}

Right hand side of the linear system. Has shape (N,) or (N,1).

Returns :

x : {array, matrix}

The converged solution.

info : integer

Provides convergence information:

  • 0 : successful exit
  • >0 : convergence to tolerance not achieved, number of iterations
  • <0 : illegal input or breakdown

Other Parameters:

x0 : {array, matrix}

Starting guess for the solution.

tol : float

Tolerance to achieve. The algorithm terminates when either the relative or the absolute residual is below tol.

maxiter : integer

Maximum number of iterations. Iteration will stop after maxiter steps even if the specified tolerance has not been achieved.

M : {sparse matrix, dense matrix, LinearOperator}

Preconditioner for A. The preconditioner should approximate the inverse of A. Effective preconditioning dramatically improves the rate of convergence, which implies that fewer iterations are needed to reach a given error tolerance.

callback : function

User-supplied function to call after each iteration. It is called as callback(xk), where xk is the current solution vector.

xtype : {‘f’,’d’,’F’,’D’}

This parameter is deprecated – avoid using it.

The type of the result. If None, then it will be determined from A.dtype.char and b. If A does not have a typecode method then it will compute A.matvec(x0) to get a typecode. To save the extra computation when A does not have a typecode attribute, use xtype=0 for the same type as b, or use xtype='f', 'd', 'F', or 'D'. This parameter has been superseded by LinearOperator.

Notes

THIS FUNCTION IS EXPERIMENTAL AND SUBJECT TO CHANGE!

References

C. C. Paige and M. A. Saunders (1975), “Solution of sparse indefinite systems of linear equations”, SIAM J. Numer. Anal. 12(4), pp. 617-629. http://www.stanford.edu/group/SOL/software/minres.html

This function is a translation of the following MATLAB implementation: http://www.stanford.edu/group/SOL/software/minres/matlab/
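
Examples

A minimal usage sketch (a small symmetric indefinite matrix, chosen for illustration):

>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> from scipy.sparse.linalg import minres
>>> A = csr_matrix(np.array([[1., 2.], [2., -1.]]))   # symmetric, indefinite
>>> x, info = minres(A, np.array([1., 1.]))
>>> info
0
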
scipy.sparse.linalg.qmr(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M1=None, M2=None, callback=None)

Use Quasi-Minimal Residual iteration to solve A x = b

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The N-by-N matrix of the linear system.

b : {array, matrix}

Right hand side of the linear system. Has shape (N,) or (N,1).

Returns :

x : {array, matrix}

The converged solution.

info : integer

Provides convergence information:

  • 0 : successful exit
  • >0 : convergence to tolerance not achieved, number of iterations
  • <0 : illegal input or breakdown

Other Parameters:

x0 : {array, matrix}

Starting guess for the solution.

tol : float

Tolerance to achieve. The algorithm terminates when either the relative or the absolute residual is below tol.

maxiter : integer

Maximum number of iterations. Iteration will stop after maxiter steps even if the specified tolerance has not been achieved.

M1 : {sparse matrix, dense matrix, LinearOperator}

Left preconditioner for A.

M2 : {sparse matrix, dense matrix, LinearOperator}

Right preconditioner for A. Used together with the left preconditioner M1. The matrix M1*A*M2 should be better conditioned than A alone.

callback : function

User-supplied function to call after each iteration. It is called as callback(xk), where xk is the current solution vector.

xtype : {‘f’,’d’,’F’,’D’}

This parameter is DEPRECATED – avoid using it.

The type of the result. If None, then it will be determined from A.dtype.char and b. If A does not have a typecode method then it will compute A.matvec(x0) to get a typecode. To save the extra computation when A does not have a typecode attribute, use xtype=0 for the same type as b, or use xtype='f', 'd', 'F', or 'D'. This parameter has been superseded by LinearOperator.

See also

LinearOperator

scipy.sparse.linalg.spilu(A, drop_tol=None, fill_factor=None, drop_rule=None, permc_spec=None, diag_pivot_thresh=None, relax=None, panel_size=None, options=None)

Compute an incomplete LU decomposition for a sparse, square matrix A.

The resulting object is an approximation to the inverse of A.

Parameters :

A :

Sparse matrix to factorize

drop_tol : float, optional

Drop tolerance (0 <= tol <= 1) for an incomplete LU decomposition. (default: 1e-4)

fill_factor : float, optional

Specifies the fill ratio upper bound (>= 1.0) for ILU. (default: 10)

drop_rule : str, optional

Comma-separated string of drop rules to use. Available rules: basic, prows, column, area, secondary, dynamic, interp. (Default: basic,area)

See SuperLU documentation for details.

milu : str, optional

Which version of modified ILU to use. (Choices: silu, smilu_1, smilu_2 (default), smilu_3.)

Remaining other options :

Same as for splu

Returns :

invA_approx : scipy.sparse.linalg.dsolve._superlu.SciPyLUType

Object with a solve method.

See also

splu
complete LU decomposition

Notes

To get a better approximation to the inverse, you may need to increase fill_factor and decrease drop_tol.

This function uses the SuperLU library.

References

[SLU] SuperLU, http://crd.lbl.gov/~xiaoye/SuperLU/
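
Examples

A minimal usage sketch (the matrix values are illustrative):

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import spilu
>>> A = csc_matrix(np.array([[4., 1., 0.], [1., 4., 1.], [0., 1., 4.]]))
>>> B = spilu(A, drop_tol=1e-5)           # incomplete factorization
>>> x = B.solve(np.array([1., 2., 3.]))   # approximate solve via the ILU factors
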
scipy.sparse.linalg.splu(A, permc_spec=None, diag_pivot_thresh=None, drop_tol=None, relax=None, panel_size=None, options={})

Compute the LU decomposition of a sparse, square matrix.

Parameters :

A :

Sparse matrix to factorize. Should be in CSR or CSC format.

permc_spec : str, optional

How to permute the columns of the matrix for sparsity preservation. (default: ‘COLAMD’)

  • NATURAL: natural ordering.
  • MMD_ATA: minimum degree ordering on the structure of A^T A.
  • MMD_AT_PLUS_A: minimum degree ordering on the structure of A^T+A.
  • COLAMD: approximate minimum degree column ordering

diag_pivot_thresh : float, optional

Threshold used for a diagonal entry to be an acceptable pivot. See SuperLU user’s guide for details [SLU]

drop_tol : float, optional

(deprecated) No effect.

relax : int, optional

Expert option for customizing the degree of relaxing supernodes. See SuperLU user’s guide for details [SLU]

panel_size : int, optional

Expert option for customizing the panel size. See SuperLU user’s guide for details [SLU]

options : dict, optional

Dictionary containing additional expert options to SuperLU. See SuperLU user guide [SLU] (section 2.4 on the ‘Options’ argument) for more details. For example, you can specify options=dict(Equil=False, IterRefine='SINGLE') to turn equilibration off and perform a single iterative refinement.

Returns :

invA : scipy.sparse.linalg.dsolve._superlu.SciPyLUType

Object with a solve method.

See also

spilu
incomplete LU decomposition

Notes

This function uses the SuperLU library.

References

[SLU] SuperLU, http://crd.lbl.gov/~xiaoye/SuperLU/
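
Examples

A minimal usage sketch (illustrative values):

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import splu
>>> A = csc_matrix(np.array([[1., 2.], [3., 4.]]))
>>> lu = splu(A)                       # complete LU factorization
>>> x = lu.solve(np.array([1., 1.]))   # solves A x = [1, 1]
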
scipy.sparse.linalg.spsolve(A, b, permc_spec=None, use_umfpack=True)

Solve the sparse linear system Ax=b
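
Examples

A minimal usage sketch (illustrative values; spsolve returns a dense ndarray):

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import spsolve
>>> A = csc_matrix(np.array([[3., 0.], [1., 2.]]))
>>> spsolve(A, np.array([3., 5.]))
array([ 1.,  2.])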

scipy.sparse.linalg.svds(A, k=6, ncv=None, tol=0)

Compute k singular values/vectors for a sparse matrix using ARPACK.

Parameters :

A : sparse matrix

Array to compute the SVD on

k : int, optional

Number of singular values and vectors to compute.

ncv : integer

The number of Lanczos vectors generated. ncv must be greater than k+1 and smaller than n; it is recommended that ncv > 2*k.

tol : float, optional

Tolerance for singular values. Zero (default) means machine precision.
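
Examples

A minimal usage sketch (a random matrix as an illustrative input):

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import svds
>>> A = csc_matrix(np.random.rand(9, 6))
>>> u, s, vt = svds(A, k=3)        # the 3 largest singular triplets
>>> s.shape
(3,)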

scipy.sparse.linalg.use_solver(**kwargs)
Valid keyword arguments with defaults (others are ignored):

useUmfpack = True
assumeSortedIndices = False

The default sparse solver is umfpack when available. This can be changed by passing useUmfpack = False, which then causes the always-available SuperLU-based solver to be used.

Umfpack requires a CSR/CSC matrix to have sorted column/row indices. If you are sure that the matrix fulfills this, pass assumeSortedIndices=True to gain some speed.
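
For example (a sketch; the call only has an effect when UMFPACK is installed):

>>> from scipy.sparse.linalg import use_solver
>>> use_solver(useUmfpack=False)   # always use the SuperLU-based solver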
