Sparse linear algebra (scipy.sparse.linalg)

Warning

This documentation is a work in progress and not yet organized.

Sparse Linear Algebra

The submodules of sparse.linalg:
  1. eigen: sparse eigenvalue problem solvers
  2. isolve: iterative methods for solving linear systems
  3. dsolve: direct factorization methods for solving linear systems


class scipy.sparse.linalg.LinearOperator(shape, matvec, rmatvec=None, matmat=None, dtype=None)

Common interface for performing matrix-vector products

Many iterative methods (e.g. cg, gmres) do not need to know the individual entries of a matrix to solve a linear system A*x = b. Such solvers only require the computation of matrix-vector products A*v, where v is a dense vector. This class serves as an abstract interface between iterative solvers and matrix-like objects.

Parameters :

shape : tuple

Matrix dimensions (M,N)

matvec : callable f(v)

Returns A * v.

See also

aslinearoperator
Construct LinearOperators

Notes

The user-defined matvec() function must properly handle the case where v has shape (N,) as well as the (N,1) case. The shape of the return type is handled internally by LinearOperator.

Examples

>>> import numpy as np
>>> from scipy.sparse.linalg import LinearOperator
>>> def mv(v):
...     return np.array([2*v[0], 3*v[1]])
...
>>> A = LinearOperator((2, 2), matvec=mv)
>>> A
<2x2 LinearOperator with unspecified dtype>
>>> A.matvec(np.ones(2))
array([ 2.,  3.])
>>> A * np.ones(2)
array([ 2.,  3.])

Methods

matmat
matvec
matmat(X)

Matrix-matrix multiplication

Performs the operation y = A*X, where A is an M-by-N linear operator and X is a dense N-by-K matrix or ndarray.

Parameters :

X : {matrix, ndarray}

An array with shape (N,K).

Returns :

Y : {matrix, ndarray}

A matrix or ndarray with shape (M,K) depending on the type of the X argument.

Notes

This matmat wraps any user-specified matmat routine to ensure that y has the correct type.

matvec(x)

Matrix-vector multiplication

Performs the operation y = A*x, where A is an M-by-N linear operator and x is a column vector or rank-1 array.

Parameters :

x : {matrix, ndarray}

An array with shape (N,) or (N,1).

Returns :

y : {matrix, ndarray}

A matrix or ndarray with shape (M,) or (M,1) depending on the type and shape of the x argument.

Notes

This matvec wraps the user-specified matvec routine to ensure that y has the correct shape and type.
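
As a concrete illustration of the shape requirement above, here is a minimal sketch (not part of the original documentation) of a user-defined matvec that handles both cases by flattening its input first:

>>> import numpy as np
>>> from scipy.sparse.linalg import LinearOperator
>>> def mv(v):
...     v = np.ravel(v)                    # accept both (N,) and (N,1) inputs
...     return np.array([2*v[0], 3*v[1]])
...
>>> A = LinearOperator((2, 2), matvec=mv)
>>> A.matvec(np.ones((2, 1))).shape        # LinearOperator restores the (M,1) shape
(2, 1)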

scipy.sparse.linalg.Tester

alias of NoseTester

scipy.sparse.linalg.aslinearoperator(A)

Return A as a LinearOperator.

‘A’ may be any of the following types:
  • ndarray
  • matrix
  • sparse matrix (e.g. csr_matrix, lil_matrix, etc.)
  • LinearOperator
  • An object with .shape and .matvec attributes

See the LinearOperator documentation for additional information.

Examples

>>> import numpy as np
>>> from scipy.sparse.linalg import aslinearoperator
>>> M = np.matrix([[1, 2, 3], [4, 5, 6]], dtype='int32')
>>> aslinearoperator(M)
<2x3 LinearOperator with dtype=int32>

scipy.sparse.linalg.bicg(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None)

Use BIConjugate Gradient iteration to solve A x = b

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The N-by-N matrix of the linear system.

b : {array, matrix}

Right hand side of the linear system. Has shape (N,) or (N,1).

scipy.sparse.linalg.bicgstab(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None)

Use BIConjugate Gradient STABilized iteration to solve A x = b

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The N-by-N matrix of the linear system.

b : {array, matrix}

Right hand side of the linear system. Has shape (N,) or (N,1).

scipy.sparse.linalg.cg(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None)

Use Conjugate Gradient iteration to solve A x = b

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The N-by-N matrix of the linear system.

b : {array, matrix}

Right hand side of the linear system. Has shape (N,) or (N,1).

scipy.sparse.linalg.cgs(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None)

Use Conjugate Gradient Squared iteration to solve A x = b

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The N-by-N matrix of the linear system.

b : {array, matrix}

Right hand side of the linear system. Has shape (N,) or (N,1).
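
The four solvers above (bicg, bicgstab, cg, cgs) share the same call signature and return a pair (x, info), where info = 0 indicates successful convergence. A minimal sketch using cg on a small symmetric positive definite system (the matrix is illustrative, not from the original documentation):

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import cg
>>> A = csc_matrix([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
>>> b = np.array([1.0, 2.0])
>>> x, info = cg(A, b)
>>> info                                       # 0 means successful convergence
0
>>> np.allclose(A * x, b)
True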

class scipy.sparse.linalg.csc_matrix(arg1, shape=None, dtype=None, copy=False, dims=None, nzmax=None)

Compressed Sparse Column matrix

This can be instantiated in several ways:
csc_matrix(D)
with a dense matrix or rank-2 ndarray D
csc_matrix(S)
with another sparse matrix S (equivalent to S.tocsc())
csc_matrix((M, N), [dtype])
to construct an empty matrix with shape (M, N); dtype is optional, defaulting to dtype='d'.
csc_matrix((data, ij), [shape=(M, N)])
where data and ij satisfy the relationship a[ij[0, k], ij[1, k]] = data[k]
csc_matrix((data, indices, indptr), [shape=(M, N)])
is the standard CSC representation, where the row indices for column i are stored in indices[indptr[i]:indptr[i+1]] and their corresponding values are stored in data[indptr[i]:indptr[i+1]]. If the shape parameter is not supplied, the matrix dimensions are inferred from the index arrays.

Notes

Advantages of the CSC format
  • efficient arithmetic operations CSC + CSC, CSC * CSC, etc.
  • efficient column slicing
  • fast matrix vector products (CSR, BSR may be faster)
Disadvantages of the CSC format
  • slow row slicing operations (consider CSR)
  • changes to the sparsity structure are expensive (consider LIL or DOK)

Examples

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> csc_matrix((3, 4), dtype=np.int8).todense()
matrix([[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]], dtype=int8)
>>> row = np.array([0, 2, 2, 0, 1, 2])
>>> col = np.array([0, 0, 1, 2, 2, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> csc_matrix((data, (row, col)), shape=(3, 3)).todense()
matrix([[1, 0, 4],
        [0, 0, 5],
        [2, 3, 6]])
>>> indptr = np.array([0, 2, 3, 6])
>>> indices = np.array([0, 2, 2, 0, 1, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> csc_matrix((data, indices, indptr), shape=(3, 3)).todense()
matrix([[1, 0, 4],
        [0, 0, 5],
        [2, 3, 6]])

Methods

asformat
asfptype
astype
check_format
conj
conjugate
copy
diagonal
dot
eliminate_zeros
ensure_sorted_indices
getH
get_shape
getcol
getdata
getformat
getmaxprint
getnnz
getrow
listprint
matmat
matvec
mean
multiply
nonzero
prune
reshape
rmatvec
rowcol
save
set_shape
setdiag
sort_indices
sorted_indices
sum
sum_duplicates
toarray
tobsr
tocoo
tocsc
tocsr
todense
todia
todok
tolil
transpose
rowcol(*args, **kwds)

rowcol is deprecated!

class scipy.sparse.linalg.csr_matrix(arg1, shape=None, dtype=None, copy=False, dims=None, nzmax=None)

Compressed Sparse Row matrix

This can be instantiated in several ways:
csr_matrix(D)
with a dense matrix or rank-2 ndarray D
csr_matrix(S)
with another sparse matrix S (equivalent to S.tocsr())
csr_matrix((M, N), [dtype])
to construct an empty matrix with shape (M, N); dtype is optional, defaulting to dtype='d'.
csr_matrix((data, ij), [shape=(M, N)])
where data and ij satisfy the relationship a[ij[0, k], ij[1, k]] = data[k]
csr_matrix((data, indices, indptr), [shape=(M, N)])
is the standard CSR representation, where the column indices for row i are stored in indices[indptr[i]:indptr[i+1]] and their corresponding values are stored in data[indptr[i]:indptr[i+1]]. If the shape parameter is not supplied, the matrix dimensions are inferred from the index arrays.

Notes

Advantages of the CSR format
  • efficient arithmetic operations CSR + CSR, CSR * CSR, etc.
  • efficient row slicing
  • fast matrix vector products
Disadvantages of the CSR format
  • slow column slicing operations (consider CSC)
  • changes to the sparsity structure are expensive (consider LIL or DOK)

Examples

>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> csr_matrix((3, 4), dtype=np.int8).todense()
matrix([[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]], dtype=int8)
>>> row = np.array([0, 0, 1, 2, 2, 2])
>>> col = np.array([0, 2, 2, 0, 1, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> csr_matrix((data, (row, col)), shape=(3, 3)).todense()
matrix([[1, 0, 2],
        [0, 0, 3],
        [4, 5, 6]])
>>> indptr = np.array([0, 2, 3, 6])
>>> indices = np.array([0, 2, 2, 0, 1, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> csr_matrix((data, indices, indptr), shape=(3, 3)).todense()
matrix([[1, 0, 2],
        [0, 0, 3],
        [4, 5, 6]])

Methods

asformat
asfptype
astype
check_format
conj
conjugate
copy
diagonal
dot
eliminate_zeros
ensure_sorted_indices
getH
get_shape
getcol
getdata
getformat
getmaxprint
getnnz
getrow
listprint
matmat
matvec
mean
multiply
nonzero
prune
reshape
rmatvec
rowcol
save
set_shape
setdiag
sort_indices
sorted_indices
sum
sum_duplicates
toarray
tobsr
tocoo
tocsc
tocsr
todense
todia
todok
tolil
transpose
rowcol(*args, **kwds)

rowcol is deprecated!

scipy.sparse.linalg.eigen(A, k=6, M=None, sigma=None, which='LM', v0=None, ncv=None, maxiter=None, tol=0, return_eigenvectors=True)

Find k eigenvalues and eigenvectors of the square matrix A.

Solves A * x[i] = w[i] * x[i], the standard eigenvalue problem for w[i] eigenvalues with corresponding eigenvectors x[i].

Parameters :

A : matrix, array, or object with matvec(x) method

An N x N matrix, array, or an object with matvec(x) method to perform the matrix vector product A * x. The sparse matrix formats in scipy.sparse are appropriate for A.

k : integer

The number of eigenvalues and eigenvectors desired

Returns :

w : array

Array of k eigenvalues

v : array

An array of k eigenvectors. v[:, i] is the eigenvector corresponding to the eigenvalue w[i].

See also

eigen_symmetric
eigenvalues and eigenvectors for symmetric matrix A
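
A minimal sketch of calling eigen as documented above (eigen_symmetric is analogous); the diagonal test matrix is illustrative, not from the original documentation:

>>> import numpy as np
>>> from scipy.sparse import spdiags
>>> from scipy.sparse.linalg import eigen
>>> n = 20
>>> A = spdiags(np.arange(1, n + 1, dtype=float), 0, n, n)  # eigenvalues 1..20
>>> w, v = eigen(A, k=3)        # the k=3 eigenvalues of largest magnitude
>>> w.shape, v.shape
((3,), (20, 3))
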
scipy.sparse.linalg.eigen_symmetric(A, k=6, M=None, sigma=None, which='LM', v0=None, ncv=None, maxiter=None, tol=0, return_eigenvectors=True)

Find k eigenvalues and eigenvectors of the real symmetric square matrix A.

Solves A * x[i] = w[i] * x[i], the standard eigenvalue problem for w[i] eigenvalues with corresponding eigenvectors x[i].

Parameters :

A : matrix or array with real entries or object with matvec(x) method

An N x N real symmetric matrix or array or an object with matvec(x) method to perform the matrix vector product A * x. The sparse matrix formats in scipy.sparse are appropriate for A.

k : integer

The number of eigenvalues and eigenvectors desired

Returns :

w : array

Array of k eigenvalues

v : array

An array of k eigenvectors. v[:, i] is the eigenvector corresponding to the eigenvalue w[i].

See also

eigen
eigenvalues and eigenvectors for a general (nonsymmetric) matrix A
scipy.sparse.linalg.factorized(A)

Return a function for solving a sparse linear system, with A pre-factorized.

Example:

>>> solve = factorized(A)  # makes LU decomposition
>>> x1 = solve(rhs1)       # uses the LU factors
>>> x2 = solve(rhs2)       # uses the LU factors again

scipy.sparse.linalg.gmres(A, b, x0=None, tol=1e-05, restart=None, maxiter=None, xtype=None, M=None, callback=None, restrt=None)

Use Generalized Minimal RESidual iteration to solve A x = b

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The N-by-N matrix of the linear system.

b : {array, matrix}

Right hand side of the linear system. Has shape (N,) or (N,1).

See also

LinearOperator
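
A minimal sketch of a gmres call (the matrix is illustrative, not from the original documentation); unlike cg, the matrix need not be symmetric:

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import gmres
>>> A = csc_matrix([[3.0, 2.0, 0.0],
...                 [1.0, -1.0, 0.0],
...                 [0.0, 5.0, 1.0]])
>>> b = np.array([2.0, 4.0, -1.0])
>>> x, info = gmres(A, b)
>>> info
0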

scipy.sparse.linalg.lgmres(A, b, x0=None, tol=1e-05, maxiter=1000, M=None, callback=None, inner_m=30, outer_k=3, outer_v=None, store_outer_Av=True)

Solve a matrix equation using the LGMRES algorithm.

The LGMRES algorithm [BJM] [BPh] is designed to avoid some problems in the convergence in restarted GMRES, and often converges in fewer iterations.

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The N-by-N matrix of the linear system.

b : {array, matrix}

Right hand side of the linear system. Has shape (N,) or (N,1).

x0 : {array, matrix}

Starting guess for the solution.

tol : float

Tolerance to achieve. The algorithm terminates when either the relative or the absolute residual is below tol.

maxiter : integer

Maximum number of iterations. Iteration will stop after maxiter steps even if the specified tolerance has not been achieved.

M : {sparse matrix, dense matrix, LinearOperator}

Preconditioner for A. The preconditioner should approximate the inverse of A. Effective preconditioning dramatically improves the rate of convergence, which implies that fewer iterations are needed to reach a given error tolerance.

callback : function

User-supplied function to call after each iteration. It is called as callback(xk), where xk is the current solution vector.

Returns :

x : array or matrix

The converged solution.

info : integer

Provides convergence information:

0 : successful exit
>0 : convergence to tolerance not achieved, number of iterations
<0 : illegal input or breakdown

Notes

The LGMRES algorithm [BJM] [BPh] is designed to avoid the slowing of convergence in restarted GMRES that is due to alternating residual vectors. It typically outperforms GMRES(m) of comparable memory requirements by some measure, or is at least not much worse.

Another advantage in this algorithm is that you can supply it with ‘guess’ vectors in the outer_v argument that augment the Krylov subspace. If the solution lies close to the span of these vectors, the algorithm converges faster. This can be useful if several very similar matrices need to be inverted one after another, such as in Newton-Krylov iteration where the Jacobian matrix often changes little in the nonlinear steps.
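
A minimal sketch of reusing outer_v across successive solves, as described above. That outer_v accumulates (v, A*v) pairs when store_outer_Av=True is inferred from the signature rather than stated in this section, so treat the details as an assumption:

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import lgmres
>>> A = csc_matrix([[4.0, 1.0], [1.0, 3.0]])
>>> outer_v = []                                  # filled in by the first solve
>>> x1, info1 = lgmres(A, np.array([1.0, 2.0]), outer_v=outer_v)
>>> x2, info2 = lgmres(A, np.array([2.0, 1.0]), outer_v=outer_v)  # reuses vectors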

References

[BJM] A.H. Baker, E.R. Jessup and T. Manteuffel, SIAM J. Matrix Anal. Appl. 26, 962 (2005).
[BPh] A.H. Baker, PhD thesis, University of Colorado (2003). http://amath.colorado.edu/activities/thesis/allisonb/Thesis.ps

scipy.sparse.linalg.lobpcg(A, X, B=None, M=None, Y=None, tol=None, maxiter=20, largest=True, verbosityLevel=0, retLambdaHistory=False, retResidualNormsHistory=False)

Solve symmetric partial eigenproblems with optional preconditioning

This function implements the Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG).

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The symmetric linear operator of the problem, usually a sparse matrix. Often called the “stiffness matrix”.

X : array_like

Initial approximation to the k eigenvectors. If A has shape (n, n), then X should have shape (n, k).

Returns :

w : array

Array of k eigenvalues

v : array

An array of k eigenvectors. V has the same shape as X.

Notes

If both retLambdaHistory and retResidualNormsHistory are True, the return tuple has the following format (lambda, V, lambda history, residual norms history)
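
A minimal sketch of a lobpcg call (the test matrix is illustrative, not from the original documentation); note that X carries both the number of desired eigenpairs and the initial guess:

>>> import numpy as np
>>> from scipy.sparse import spdiags
>>> from scipy.sparse.linalg import lobpcg
>>> n = 100
>>> A = spdiags(np.arange(1, n + 1, dtype=float), 0, n, n)
>>> X = np.random.rand(n, 3)              # initial guess for k = 3 eigenvectors
>>> w, v = lobpcg(A, X, tol=1e-8, maxiter=200)
>>> w.shape, v.shape
((3,), (100, 3))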

scipy.sparse.linalg.lsqr(A, b, damp=0.0, atol=1e-08, btol=1e-08, conlim=100000000.0, iter_lim=None, show=False, calc_var=False)

Find the least-squares solution to a large, sparse, linear system of equations.

The function solves Ax = b, or min ||b - Ax||^2, or min ||Ax - b||^2 + d^2 ||x||^2.

The matrix A may be square or rectangular (over-determined or under-determined), and may have any rank.

1. Unsymmetric equations --    solve  A*x = b

2. Linear least squares  --    solve  A*x = b
                               in the least-squares sense

3. Damped least squares  --    solve  (   A    )*x = ( b )
                                      ( damp*I )     ( 0 )
                               in the least-squares sense
Parameters :

A : {sparse matrix, ndarray, LinearOperator}

Representation of an m-by-n matrix. It is required that the linear operator can produce Ax and A^T x.

b : (m,) ndarray

Right-hand side vector b.

damp : float

Damping coefficient.

atol, btol : float

Stopping tolerances. If both are 1.0e-9 (say), the final residual norm should be accurate to about 9 digits. (The final x will usually have fewer correct digits, depending on cond(A) and the size of damp.)

conlim : float

Another stopping tolerance. lsqr terminates if an estimate of cond(A) exceeds conlim. For compatible systems Ax = b, conlim could be as large as 1.0e+12 (say). For least-squares problems, conlim should be less than 1.0e+8. Maximum precision can be obtained by setting atol = btol = conlim = zero, but the number of iterations may then be excessive.

iter_lim : int

Explicit limitation on number of iterations (for safety).

show : bool

Display an iteration log.

calc_var : bool

Whether to estimate diagonals of (A'A + damp^2*I)^{-1}.

Returns :

x : ndarray of float

The final solution.

istop : int

Gives the reason for termination. 1 means x is an approximate solution to Ax = b. 2 means x approximately solves the least-squares problem.

itn : int

Iteration number upon termination.

r1norm : float

norm(r), where r = b - Ax.

r2norm : float

sqrt( norm(r)^2  +  damp^2 * norm(x)^2 ). Equal to r1norm if damp == 0.

anorm : float

Estimate of Frobenius norm of Abar = [[A]; [damp*I]].

acond : float

Estimate of cond(Abar).

arnorm : float

Estimate of norm(A'*r - damp^2*x).

xnorm : float

norm(x)

var : ndarray of float

If calc_var is True, estimates all diagonals of (A'A)^{-1} (if damp == 0) or more generally (A'A + damp^2*I)^{-1}. This is well defined if A has full column rank or damp > 0. (Not sure what var means if rank(A) < n and damp = 0.)

Notes

LSQR uses an iterative method to approximate the solution. The number of iterations required to reach a certain accuracy depends strongly on the scaling of the problem. Poor scaling of the rows or columns of A should therefore be avoided where possible.

For example, in problem 1 the solution is unaltered by row-scaling. If a row of A is very small or large compared to the other rows of A, the corresponding row of ( A b ) should be scaled up or down.

In problems 1 and 2, the solution x is easily recovered following column-scaling. Unless better information is known, the nonzero columns of A should be scaled so that they all have the same Euclidean norm (e.g., 1.0).

In problem 3, there is no freedom to re-scale if damp is nonzero. However, the value of damp should be assigned only after attention has been paid to the scaling of A.

The parameter damp is intended to help regularize ill-conditioned systems, by preventing the true solution from being very large. Another aid to regularization is provided by the parameter acond, which may be used to terminate iterations before the computed solution becomes very large.

If some initial estimate x0 is known and if damp == 0, one could proceed as follows:

  1. Compute a residual vector r0 = b - A*x0.
  2. Use LSQR to solve the system A*dx = r0.
  3. Add the correction dx to obtain a final solution x = x0 + dx.

This requires that x0 be available before and after the call to LSQR. To judge the benefits, suppose LSQR takes k1 iterations to solve A*x = b and k2 iterations to solve A*dx = r0. If x0 is “good”, norm(r0) will be smaller than norm(b). If the same stopping tolerances atol and btol are used for each system, k1 and k2 will be similar, but the final solution x0 + dx should be more accurate. The only way to reduce the total work is to use a larger stopping tolerance for the second system. If some value btol is suitable for A*x = b, the larger value btol*norm(b)/norm(r0) should be suitable for A*dx = r0.
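
A sketch of this warm-start recipe, under the stated damp == 0 assumption; A, b, and x0 are placeholders for a sparse matrix and conforming arrays:

>>> r0 = b - A * x0          # step 1: residual of the initial estimate x0
>>> dx = lsqr(A, r0)[0]      # step 2: solve A*dx = r0; x is the first return value
>>> x = x0 + dx              # step 3: corrected solution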

Preconditioning is another way to reduce the number of iterations. If it is possible to solve a related system M*x = b efficiently, where M approximates A in some helpful way (e.g. M - A has low rank or its elements are small relative to those of A), LSQR may converge more rapidly on the system A*M(inverse)*z = b, after which x can be recovered by solving M*x = z.

If A is symmetric, LSQR should not be used!

Alternatives are the symmetric conjugate-gradient method (cg) and/or SYMMLQ. SYMMLQ is an implementation of symmetric cg that applies to any symmetric A and will converge more rapidly than LSQR. If A is positive definite, there are other implementations of symmetric cg that require slightly less work per iteration than SYMMLQ (but will take the same number of iterations).

References

[R112] C. C. Paige and M. A. Saunders (1982a). “LSQR: An algorithm for sparse linear equations and sparse least squares”, ACM TOMS 8(1), 43-71.
[R113] C. C. Paige and M. A. Saunders (1982b). “Algorithm 583. LSQR: Sparse linear equations and least squares problems”, ACM TOMS 8(2), 195-209.
[R114] M. A. Saunders (1995). “Solution of sparse rectangular systems using LSQR and CRAIG”, BIT 35, 588-604.

scipy.sparse.linalg.minres(A, b, x0=None, shift=0.0, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, show=False, check=False)

Use MINimum RESidual iteration to solve Ax=b

MINRES minimizes norm(A*x - b) for the symmetric matrix A. Unlike the Conjugate Gradient method, A can be indefinite or singular.

If shift != 0 then the method solves (A - shift*I)x = b

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The N-by-N matrix of the linear system.

b : {array, matrix}

Right hand side of the linear system. Has shape (N,) or (N,1).

Notes

THIS FUNCTION IS EXPERIMENTAL AND SUBJECT TO CHANGE!
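
A minimal sketch with a symmetric indefinite matrix (illustrative, not from the original documentation), a case that cg cannot handle:

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import minres
>>> A = csc_matrix([[2.0, 1.0], [1.0, -2.0]])   # symmetric but indefinite
>>> b = np.array([1.0, 0.0])
>>> x, info = minres(A, b)
>>> info
0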

References

C. C. Paige and M. A. Saunders (1975), “Solution of sparse indefinite systems of linear equations”, SIAM J. Numer. Anal. 12(4), pp. 617-629. http://www.stanford.edu/group/SOL/software/minres.html

This file is a translation of the following MATLAB implementation: http://www.stanford.edu/group/SOL/software/minres/matlab/

scipy.sparse.linalg.qmr(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M1=None, M2=None, callback=None)

Use Quasi-Minimal Residual iteration to solve A x = b

Parameters :

A : {sparse matrix, dense matrix, LinearOperator}

The N-by-N matrix of the linear system.

b : {array, matrix}

Right hand side of the linear system. Has shape (N,) or (N,1).

See also

LinearOperator

scipy.sparse.linalg.spilu(A, drop_tol=None, fill_factor=None, drop_rule=None, permc_spec=None, diag_pivot_thresh=None, relax=None, panel_size=None, options=None)

Compute an incomplete LU decomposition for a sparse, square matrix A.

The resulting object is an approximation to the inverse of A.

Parameters :

A :

Sparse matrix to factorize

drop_tol : float, optional

Drop tolerance (0 <= tol <= 1) for an incomplete LU decomposition. (default: 1e-4)

fill_factor : float, optional

Specifies the fill ratio upper bound (>= 1.0) for ILU. (default: 10)

drop_rule : str, optional

Comma-separated string of drop rules to use. Available rules: basic, prows, column, area, secondary, dynamic, interp. (Default: basic,area)

See SuperLU documentation for details.

milu : str, optional

Which version of modified ILU to use. (Choices: silu, smilu_1, smilu_2 (default), smilu_3.)

Remaining options :

Same as for splu.

Returns :

invA_approx : scipy.sparse.linalg.dsolve._superlu.SciPyLUType

An object that has a solve method.

See also

splu
complete LU decomposition

Notes

To get a better approximation to the inverse, you may need to increase fill_factor and decrease drop_tol.

This function uses the SuperLU library.
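
A common use of spilu is as a preconditioner for an iterative solver. The sketch below (not from the original documentation) wraps the ILU solve in a LinearOperator and passes it as M to gmres:

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import LinearOperator, gmres, spilu
>>> A = csc_matrix([[4.0, 1.0, 0.0],
...                 [1.0, 3.0, 1.0],
...                 [0.0, 1.0, 2.0]])
>>> ilu = spilu(A, drop_tol=1e-5)
>>> M = LinearOperator(A.shape, matvec=ilu.solve)   # approximate inverse of A
>>> x, info = gmres(A, np.array([1.0, 2.0, 3.0]), M=M)
>>> info
0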

References

[SLU]SuperLU http://crd.lbl.gov/~xiaoye/SuperLU/
scipy.sparse.linalg.splu(A, permc_spec=None, diag_pivot_thresh=None, drop_tol=None, relax=None, panel_size=None, options={})

Compute the LU decomposition of a sparse, square matrix.

Parameters :

A :

Sparse matrix to factorize. Should be in CSR or CSC format.

permc_spec : str, optional

How to permute the columns of the matrix for sparsity preservation. (default: ‘COLAMD’)

  • NATURAL: natural ordering.
  • MMD_ATA: minimum degree ordering on the structure of A^T A.
  • MMD_AT_PLUS_A: minimum degree ordering on the structure of A^T+A.
  • COLAMD: approximate minimum degree column ordering

diag_pivot_thresh : float, optional

Threshold used for a diagonal entry to be an acceptable pivot. See SuperLU user’s guide for details [SLU]

drop_tol : float, optional

(deprecated) No effect.

relax : int, optional

Expert option for customizing the degree of relaxing supernodes. See SuperLU user’s guide for details [SLU]

panel_size : int, optional

Expert option for customizing the panel size. See SuperLU user’s guide for details [SLU]

options : dict, optional

Dictionary containing additional expert options to SuperLU. See the SuperLU user guide [SLU] (section 2.4 on the ‘Options’ argument) for more details. For example, you can specify options=dict(Equil=False, IterRefine='SINGLE') to turn equilibration off and perform a single iterative refinement.

Returns :

invA : scipy.sparse.linalg.dsolve._superlu.SciPyLUType

An object that has a solve method.

See also

spilu
incomplete LU decomposition

Notes

This function uses the SuperLU library.
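
A minimal sketch of factorizing once with splu and then solving repeatedly (the matrix is illustrative, not from the original documentation):

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import splu
>>> A = csc_matrix([[4.0, 1.0], [1.0, 3.0]])
>>> lu = splu(A)                          # factor once ...
>>> x = lu.solve(np.array([1.0, 2.0]))    # ... then solve cheaply per right-hand side
>>> np.allclose(A * x, np.array([1.0, 2.0]))
True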

References

[SLU]SuperLU http://crd.lbl.gov/~xiaoye/SuperLU/
scipy.sparse.linalg.spsolve(A, b, permc_spec=None, use_umfpack=True)

Solve the sparse linear system Ax=b
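
A minimal sketch (the matrix is illustrative, not from the original documentation):

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import spsolve
>>> A = csc_matrix([[3.0, 0.0], [1.0, -1.0]])
>>> b = np.array([9.0, 1.0])
>>> spsolve(A, b)
array([ 3.,  2.])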

scipy.sparse.linalg.svd(A, k=6)

Compute a few singular values/vectors for a sparse matrix using ARPACK.

Parameters :

A : sparse matrix

Array to compute the SVD on.

k : int

Number of singular values and vectors to compute.
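
The section above does not document the return value; the sketch below assumes the triple (u, s, vt) ordering used by the later scipy.sparse.linalg.svds, which is an assumption rather than something stated here:

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import svd
>>> A = csc_matrix(np.random.rand(10, 8))
>>> u, s, vt = svd(A, k=3)    # assumed (u, s, vt) ordering, as in later svds
>>> s.shape
(3,)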

scipy.sparse.linalg.use_solver(**kwargs)
Valid keyword arguments with defaults (others ignored):

useUmfpack = True
assumeSortedIndices = False

The default sparse solver is UMFPACK when available. This can be changed by passing useUmfpack=False, which then causes the always-available SuperLU-based solver to be used.

UMFPACK requires a CSR/CSC matrix to have sorted column/row indices. If you are sure the matrix fulfills this, pass assumeSortedIndices=True to gain some speed.
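
A minimal sketch of switching the backend before a spsolve call:

>>> from scipy.sparse.linalg import use_solver
>>> use_solver(useUmfpack=False)    # always use the SuperLU-based solver
>>> use_solver(useUmfpack=True, assumeSortedIndices=True)   # UMFPACK, skip the sort check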
