Warning
This documentation is work-in-progress and unorganized.
ARPACK error
ARPACK iteration did not converge
Attributes
eigenvalues : ndarray
    Partial result. Converged eigenvalues.
eigenvectors : ndarray
    Partial result. Converged eigenvectors.
Common interface for performing matrix vector products
Many iterative methods (e.g. cg, gmres) do not need to know the individual entries of a matrix to solve a linear system A*x=b. Such solvers only require the computation of matrix vector products, A*v where v is a dense vector. This class serves as an abstract interface between iterative solvers and matrix-like objects.
Parameters
shape : tuple
matvec : callable f(v)
Other Parameters
rmatvec : callable f(v)
matmat : callable f(V)
dtype : dtype
Notes
The user-defined matvec() function must properly handle the case where v has shape (N,) as well as the (N,1) case. The shape of the returned array is handled internally by LinearOperator.
Examples
>>> import numpy as np
>>> from scipy.sparse.linalg import LinearOperator
>>> def mv(v):
...     return np.array([2*v[0], 3*v[1]])
...
>>> A = LinearOperator((2, 2), matvec=mv)
>>> A
<2x2 LinearOperator with unspecified dtype>
>>> A.matvec(np.ones(2))
array([ 2.,  3.])
>>> A * np.ones(2)
array([ 2.,  3.])
Methods
matmat(X)    Matrix-matrix multiplication.
matvec(x)    Matrix-vector multiplication.
Matrix-matrix multiplication
Performs the operation y = A*X where A is an MxN linear operator and X is a dense NxK matrix or ndarray.
Parameters
X : {matrix, ndarray}
Returns
Y : {matrix, ndarray}
Notes
This matmat wraps any user-specified matmat routine to ensure that y has the correct type.
Matrix-vector multiplication
Performs the operation y=A*x where A is an MxN linear operator and x is a column vector or rank-1 array.
Parameters
x : {matrix, ndarray}
Returns
y : {matrix, ndarray}
Notes
This matvec wraps the user-specified matvec routine to ensure that y has the correct shape and type.
Return A as a LinearOperator.
See the LinearOperator documentation for additional information.
Examples
>>> from numpy import matrix
>>> from scipy.sparse.linalg import aslinearoperator
>>> M = matrix([[1, 2, 3], [4, 5, 6]], dtype='int32')
>>> aslinearoperator(M)
<2x3 LinearOperator with dtype=int32>
Use BIConjugate Gradient iteration to solve A x = b
Parameters
A : {sparse matrix, dense matrix, LinearOperator}
b : {array, matrix}
Returns
x : {array, matrix}
info : integer
Other Parameters
x0 : {array, matrix}
tol : float
maxiter : integer
M : {sparse matrix, dense matrix, LinearOperator}
callback : function
xtype : {'f','d','F','D'}
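A minimal usage sketch (the 2x2 system here is illustrative; the same call signature applies to bicgstab, cg, cgs, and qmr below):
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import bicg
>>> A = csc_matrix([[4., 1.], [1., 3.]])
>>> b = np.array([1., 2.])
>>> x, info = bicg(A, b, tol=1e-10)
>>> info        # 0 means successful convergence
0
>>> np.allclose(A * x, b)
True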
Use BIConjugate Gradient STABilized iteration to solve A x = b
Parameters
A : {sparse matrix, dense matrix, LinearOperator}
b : {array, matrix}
Returns
x : {array, matrix}
info : integer
Other Parameters
x0 : {array, matrix}
tol : float
maxiter : integer
M : {sparse matrix, dense matrix, LinearOperator}
callback : function
xtype : {'f','d','F','D'}
Use Conjugate Gradient iteration to solve A x = b
Parameters
A : {sparse matrix, dense matrix, LinearOperator}
b : {array, matrix}
Returns
x : {array, matrix}
info : integer
Other Parameters
x0 : {array, matrix}
tol : float
maxiter : integer
M : {sparse matrix, dense matrix, LinearOperator}
callback : function
xtype : {'f','d','F','D'}
Use Conjugate Gradient Squared iteration to solve A x = b
Parameters
A : {sparse matrix, dense matrix, LinearOperator}
b : {array, matrix}
Returns
x : {array, matrix}
info : integer
Other Parameters
x0 : {array, matrix}
tol : float
maxiter : integer
M : {sparse matrix, dense matrix, LinearOperator}
callback : function
xtype : {'f','d','F','D'}
Compressed Sparse Column matrix
Examples
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> csc_matrix((3, 4), dtype=np.int8).todense()
matrix([[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]], dtype=int8)
>>> row = np.array([0, 2, 2, 0, 1, 2])
>>> col = np.array([0, 0, 1, 2, 2, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> csc_matrix((data, (row, col)), shape=(3, 3)).todense()
matrix([[1, 0, 4],
        [0, 0, 5],
        [2, 3, 6]])
>>> indptr = np.array([0, 2, 3, 6])
>>> indices = np.array([0, 2, 2, 0, 1, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> csc_matrix((data, indices, indptr), shape=(3, 3)).todense()
matrix([[1, 0, 4],
        [0, 0, 5],
        [2, 3, 6]])
Methods
asformat
asfptype
astype
check_format
conj
conjugate
copy    Generic (shallow and deep) copying operations.
diagonal
dot
eliminate_zeros
getH
get_shape
getcol
getformat
getmaxprint
getnnz
getrow
mean
multiply
nonzero
prune
reshape
set_shape
setdiag
sort_indices
sorted_indices
sum
sum_duplicates
toarray
tobsr
tocoo
tocsc
tocsr
todense
todia
todok
tolil
transpose
Compressed Sparse Row matrix
Examples
>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> csr_matrix((3, 4), dtype=np.int8).todense()
matrix([[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]], dtype=int8)
>>> row = np.array([0, 0, 1, 2, 2, 2])
>>> col = np.array([0, 2, 2, 0, 1, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> csr_matrix((data, (row, col)), shape=(3, 3)).todense()
matrix([[1, 0, 2],
        [0, 0, 3],
        [4, 5, 6]])
>>> indptr = np.array([0, 2, 3, 6])
>>> indices = np.array([0, 2, 2, 0, 1, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> csr_matrix((data, indices, indptr), shape=(3, 3)).todense()
matrix([[1, 0, 2],
        [0, 0, 3],
        [4, 5, 6]])
Methods
asformat
asfptype
astype
check_format
conj
conjugate
copy    Generic (shallow and deep) copying operations.
diagonal
dot
eliminate_zeros
getH
get_shape
getcol
getformat
getmaxprint
getnnz
getrow
mean
multiply
nonzero
prune
reshape
set_shape
setdiag
sort_indices
sorted_indices
sum
sum_duplicates
toarray
tobsr
tocoo
tocsc
tocsr
todense
todia
todok
tolil
transpose
Find k eigenvalues and eigenvectors of the square matrix A.
Solves A * x[i] = w[i] * x[i], the standard eigenvalue problem for w[i] eigenvalues with corresponding eigenvectors x[i].
Parameters
A : matrix, array, or object with matvec(x) method
k : integer
Returns
w : array
v : array
Other Parameters
M : matrix or array
sigma : real or complex
v0 : array
ncv : integer
which : string
maxiter : integer
tol : float
return_eigenvectors : boolean
Raises
ArpackNoConvergence
Notes
This function is a wrapper to the ARPACK [R162] SNEUPD, DNEUPD, CNEUPD, and ZNEUPD functions, which use the Implicitly Restarted Arnoldi Method to find the eigenvalues and eigenvectors [R163].
References
[R162] ARPACK Software, http://www.caam.rice.edu/software/ARPACK/
[R163] R. B. Lehoucq, D. C. Sorensen, and C. Yang, ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Problems by Implicitly Restarted Arnoldi Methods. SIAM, Philadelphia, PA, 1998.
Examples
Find 6 eigenvectors of the identity matrix:
>>> import numpy as np
>>> from scipy.sparse.linalg import eigs
>>> A = np.identity(13)
>>> vals, vecs = eigs(A, k=6)
>>> vals
array([ 1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j])
>>> vecs.shape
(13, 6)
Find k eigenvalues and eigenvectors of the real symmetric square matrix A.
Solves A * x[i] = w[i] * x[i], the standard eigenvalue problem for w[i] eigenvalues with corresponding eigenvectors x[i].
Parameters
A : matrix or array with real entries, or object with matvec(x) method
k : integer
Returns
w : array
v : array
Other Parameters
M : matrix or array
sigma : real
v0 : array
ncv : integer
which : string
maxiter : integer
tol : float
return_eigenvectors : boolean
Raises
ArpackNoConvergence
Notes
This function is a wrapper to the ARPACK [R164] SSEUPD and DSEUPD functions, which use the Implicitly Restarted Lanczos Method to find the eigenvalues and eigenvectors [R165].
References
[R164] ARPACK Software, http://www.caam.rice.edu/software/ARPACK/
[R165] R. B. Lehoucq, D. C. Sorensen, and C. Yang, ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Problems by Implicitly Restarted Arnoldi Methods. SIAM, Philadelphia, PA, 1998.
Examples
>>> import numpy as np
>>> from scipy.sparse.linalg import eigsh
>>> A = np.identity(13)
>>> vals, vecs = eigsh(A, k=6)
>>> vals
array([ 1.,  1.,  1.,  1.,  1.,  1.])
>>> vecs.shape
(13, 6)
Return a function for solving a sparse linear system, with A pre-factorized.
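For example, a short sketch (the matrix shown is illustrative; A should be in CSC format for the factorization):
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import factorized
>>> A = csc_matrix([[3., 2., 0.], [1., -1., 0.], [0., 5., 1.]])
>>> solve = factorized(A)                 # factorize A once
>>> x1 = solve(np.array([1., 2., 3.]))    # solve A*x1 = b1
>>> x2 = solve(np.array([4., 5., 6.]))    # reuse the factorization for b2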
Use Generalized Minimal RESidual iteration to solve A x = b.
Parameters
A : {sparse matrix, dense matrix, LinearOperator}
b : {array, matrix}
Returns
x : {array, matrix}
info : int
Other Parameters
x0 : {array, matrix}
tol : float
restart : int, optional
maxiter : int, optional
M : {sparse matrix, dense matrix, LinearOperator}
callback : function
Notes
A preconditioner, P, is chosen such that P is close to A but easy to solve for. The preconditioner parameter required by this routine is M = P^-1. The inverse should preferably not be calculated explicitly. Rather, use the following template to produce M:
# Construct a linear operator M that computes P^-1 * x,
# where P is an easily inverted approximation of A.
import scipy.sparse.linalg as spla
M_x = lambda x: spla.spsolve(P, x)
M = spla.LinearOperator((n, n), M_x)
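A complete, hedged sketch under assumed names (the tridiagonal A and the diagonal Jacobi preconditioner P below are illustrative choices, not part of the gmres API):
>>> import numpy as np
>>> import scipy.sparse as sp
>>> import scipy.sparse.linalg as spla
>>> n = 100
>>> data = [-np.ones(n), 4. * np.ones(n), -np.ones(n)]
>>> A = sp.spdiags(data, [-1, 0, 1], n, n).tocsc()
>>> b = np.ones(n)
>>> P = sp.spdiags(A.diagonal(), 0, n, n).tocsc()   # Jacobi preconditioner
>>> M = spla.LinearOperator((n, n), lambda x: spla.spsolve(P, x))
>>> x, info = spla.gmres(A, b, M=M)
>>> info
0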
Solve a matrix equation using the LGMRES algorithm.
The LGMRES algorithm [BJM] [BPh] is designed to avoid some problems in the convergence in restarted GMRES, and often converges in fewer iterations.
Parameters
A : {sparse matrix, dense matrix, LinearOperator}
b : {array, matrix}
x0 : {array, matrix}
tol : float
maxiter : integer
M : {sparse matrix, dense matrix, LinearOperator}
callback : function
Returns
x : array or matrix
info : integer
Notes
The LGMRES algorithm [BJM] [BPh] is designed to avoid the slowing of convergence in restarted GMRES due to alternating residual vectors. It typically outperforms GMRES(m) of comparable memory requirements, or at least is not much worse.
Another advantage of this algorithm is that you can supply it with 'guess' vectors in the outer_v argument that augment the Krylov subspace. If the solution lies close to the span of these vectors, the algorithm converges faster. This can be useful if several very similar matrices need to be inverted one after another, such as in Newton-Krylov iteration, where the Jacobian matrix often changes little between the nonlinear steps.
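A hedged sketch of the outer_v mechanism (assuming, as in SciPy's implementation, that outer_v is a list of (v, A*v) tuples which lgmres appends to during the solve):
>>> import numpy as np
>>> import scipy.sparse as sp
>>> from scipy.sparse.linalg import lgmres
>>> n = 50
>>> A = sp.identity(n, format='csr') + 0.1 * sp.rand(n, n, density=0.05, random_state=0)
>>> b1 = np.ones(n)
>>> outer_v = []                                # augmentation vectors, filled by lgmres
>>> x1, info = lgmres(A, b1, outer_v=outer_v)
>>> b2 = b1 + 0.01 * np.arange(n)               # a nearby problem
>>> x2, info = lgmres(A, b2, outer_v=outer_v)   # reuses the collected vectors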
References
[BJM] | (1, 2, 3) A.H. Baker and E.R. Jessup and T. Manteuffel, SIAM J. Matrix Anal. Appl. 26, 962 (2005). |
[BPh] | (1, 2, 3) A.H. Baker, PhD thesis, University of Colorado (2003). http://amath.colorado.edu/activities/thesis/allisonb/Thesis.ps |
Solve symmetric partial eigenproblems with optional preconditioning
This function implements the Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG).
Parameters
A : {sparse matrix, dense matrix, LinearOperator}
X : array_like
B : {dense matrix, sparse matrix, LinearOperator}, optional
M : {dense matrix, sparse matrix, LinearOperator}, optional
Y : array_like, optional
Returns
w : array
v : array
Other Parameters
tol : scalar, optional
maxiter : integer, optional
largest : boolean, optional
verbosityLevel : integer, optional
retLambdaHistory : boolean, optional
retResidualNormsHistory : boolean, optional
Notes
If both retLambdaHistory and retResidualNormsHistory are True, the return tuple has the following format (lambda, V, lambda history, residual norms history)
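A small sketch (the diagonal test matrix is illustrative; X carries the block of initial eigenvector guesses):
>>> import numpy as np
>>> from scipy.sparse import spdiags
>>> from scipy.sparse.linalg import lobpcg
>>> n = 100
>>> A = spdiags(np.arange(1., n + 1.), 0, n, n)   # eigenvalues 1, 2, ..., n
>>> X = np.random.rand(n, 3)                      # initial guess for 3 eigenvectors
>>> w, v = lobpcg(A, X, largest=True, maxiter=100)
>>> w.shape, v.shape
((3,), (100, 3))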
Find the least-squares solution to a large, sparse, linear system of equations.
The function solves Ax = b, or min ||b - Ax||^2, or min ||Ax - b||^2 + d^2 ||x||^2.
The matrix A may be square or rectangular (over-determined or under-determined), and may have any rank.
1. Unsymmetric equations -- solve A*x = b

2. Linear least squares -- solve A*x = b
   in the least-squares sense

3. Damped least squares -- solve ( A      )*x = ( b )
                                 ( damp*I )     ( 0 )
   in the least-squares sense
Parameters
A : {sparse matrix, ndarray, LinearOperator}
b : (m,) ndarray
damp : float
atol, btol : float
conlim : float
iter_lim : int
show : bool
calc_var : bool
Returns
x : ndarray of float
istop : int
itn : int
r1norm : float
r2norm : float
anorm : float
acond : float
arnorm : float
xnorm : float
var : ndarray of float
Notes
LSQR uses an iterative method to approximate the solution. The number of iterations required to reach a certain accuracy depends strongly on the scaling of the problem. Poor scaling of the rows or columns of A should therefore be avoided where possible.
For example, in problem 1 the solution is unaltered by row-scaling. If a row of A is very small or large compared to the other rows of A, the corresponding row of ( A b ) should be scaled up or down.
In problems 1 and 2, the solution x is easily recovered following column-scaling. Unless better information is known, the nonzero columns of A should be scaled so that they all have the same Euclidean norm (e.g., 1.0).
In problem 3, there is no freedom to re-scale if damp is nonzero. However, the value of damp should be assigned only after attention has been paid to the scaling of A.
The parameter damp is intended to help regularize ill-conditioned systems, by preventing the true solution from being very large. Another aid to regularization is provided by the parameter acond, which may be used to terminate iterations before the computed solution becomes very large.
If some initial estimate x0 is known and if damp == 0, one could proceed as follows:
- Compute a residual vector r0 = b - A*x0.
- Use LSQR to solve the system A*dx = r0.
- Add the correction dx to obtain a final solution x = x0 + dx.
This requires that x0 be available before and after the call to LSQR. To judge the benefits, suppose LSQR takes k1 iterations to solve A*x = b and k2 iterations to solve A*dx = r0. If x0 is “good”, norm(r0) will be smaller than norm(b). If the same stopping tolerances atol and btol are used for each system, k1 and k2 will be similar, but the final solution x0 + dx should be more accurate. The only way to reduce the total work is to use a larger stopping tolerance for the second system. If some value btol is suitable for A*x = b, the larger value btol*norm(b)/norm(r0) should be suitable for A*dx = r0.
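A sketch of this warm-start procedure (the names A, b, and x0 are assumed for illustration):
>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> from scipy.sparse.linalg import lsqr
>>> A = csr_matrix([[1., 2.], [3., 4.], [5., 6.]])
>>> b = np.array([1., 2., 3.])
>>> x0 = np.array([0.5, 0.0])      # initial estimate, damp == 0
>>> r0 = b - A * x0                # residual vector r0 = b - A*x0
>>> dx = lsqr(A, r0)[0]            # solve A*dx = r0
>>> x = x0 + dx                    # corrected least-squares solution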
Preconditioning is another way to reduce the number of iterations. If it is possible to solve a related system M*x = b efficiently, where M approximates A in some helpful way (e.g. M - A has low rank or its elements are small relative to those of A), LSQR may converge more rapidly on the system A*M^-1*z = b, after which x can be recovered by solving M*x = z.
If A is symmetric, LSQR should not be used!
Alternatives are the symmetric conjugate-gradient method (cg) and/or SYMMLQ. SYMMLQ is an implementation of symmetric cg that applies to any symmetric A and will converge more rapidly than LSQR. If A is positive definite, there are other implementations of symmetric cg that require slightly less work per iteration than SYMMLQ (but will take the same number of iterations).
References
[R166] C. C. Paige and M. A. Saunders (1982a). "LSQR: An algorithm for sparse linear equations and sparse least squares", ACM TOMS 8(1), 43-71.
[R167] C. C. Paige and M. A. Saunders (1982b). "Algorithm 583. LSQR: Sparse linear equations and least squares problems", ACM TOMS 8(2), 195-209.
[R168] M. A. Saunders (1995). "Solution of sparse rectangular systems using LSQR and CRAIG", BIT 35, 588-604.
Use MINimum RESidual iteration to solve Ax=b
MINRES minimizes norm(A*x - b) for the symmetric matrix A. Unlike the Conjugate Gradient method, A can be indefinite or singular.
If shift != 0 then the method solves (A - shift*I)x = b
Parameters
A : {sparse matrix, dense matrix, LinearOperator}
b : {array, matrix}
Returns
x : {array, matrix}
info : integer
Other Parameters
x0 : {array, matrix}
tol : float
maxiter : integer
M : {sparse matrix, dense matrix, LinearOperator}
callback : function
xtype : {'f','d','F','D'}
Notes
THIS FUNCTION IS EXPERIMENTAL AND SUBJECT TO CHANGE!
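A minimal sketch on a symmetric indefinite system (the 2x2 matrix is illustrative):
>>> import numpy as np
>>> from scipy.sparse.linalg import minres
>>> A = np.array([[1., 2.], [2., -1.]])   # symmetric but indefinite
>>> b = np.array([1., 0.])
>>> x, info = minres(A, b, tol=1e-10)
>>> info
0
>>> np.allclose(A.dot(x), b)
True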
Use Quasi-Minimal Residual iteration to solve A x = b
Parameters
A : {sparse matrix, dense matrix, LinearOperator}
b : {array, matrix}
Returns
x : {array, matrix}
info : integer
Other Parameters
x0 : {array, matrix}
tol : float
maxiter : integer
M1 : {sparse matrix, dense matrix, LinearOperator}
M2 : {sparse matrix, dense matrix, LinearOperator}
callback : function
xtype : {'f','d','F','D'}
Compute an incomplete LU decomposition for a sparse, square matrix A.
The resulting object is an approximation to the inverse of A.
Parameters
A : sparse matrix
drop_tol : float, optional
fill_factor : float, optional
drop_rule : str, optional
milu : str, optional
Remaining other options
Returns
invA_approx : scipy.sparse.linalg.dsolve._superlu.SciPyLUType
Notes
To improve the approximation to the inverse, you may need to increase fill_factor AND decrease drop_tol.
This function uses the SuperLU library.
References
[SLU] SuperLU, http://crd.lbl.gov/~xiaoye/SuperLU/
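A sketch of the usual pattern, using the incomplete factorization as a preconditioner for an iterative solver (the tridiagonal A is illustrative):
>>> import numpy as np
>>> import scipy.sparse as sp
>>> import scipy.sparse.linalg as spla
>>> n = 100
>>> data = [-np.ones(n), 4. * np.ones(n), -np.ones(n)]
>>> A = sp.spdiags(data, [-1, 0, 1], n, n).tocsc()   # spilu expects CSC
>>> ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
>>> M = spla.LinearOperator((n, n), ilu.solve)       # approximate inverse of A
>>> x, info = spla.gmres(A, np.ones(n), M=M)
>>> info
0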
Compute the LU decomposition of a sparse, square matrix.
Parameters
A : sparse matrix
permc_spec : str, optional
diag_pivot_thresh : float, optional
drop_tol : float, optional
relax : int, optional
panel_size : int, optional
options : dict, optional
Returns
invA : scipy.sparse.linalg.dsolve._superlu.SciPyLUType
Notes
This function uses the SuperLU library.
References
[SLU] SuperLU, http://crd.lbl.gov/~xiaoye/SuperLU/
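A short sketch (A is built in CSC format, which SuperLU requires):
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import splu
>>> A = csc_matrix([[3., 2., 0.], [1., -1., 0.], [0., 5., 1.]])
>>> lu = splu(A)
>>> b = np.array([1., 2., 3.])
>>> x = lu.solve(b)                # solve A*x = b using the LU factors
>>> np.allclose(A * x, b)
True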
Solve the sparse linear system Ax = b.
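For example (a minimal sketch):
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import spsolve
>>> A = csc_matrix([[3., 0., 1.], [0., 2., 0.], [1., 0., 4.]])
>>> b = np.array([1., 2., 3.])
>>> x = spsolve(A, b)
>>> np.allclose(A * x, b)
True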
Compute k singular values/vectors for a sparse matrix using ARPACK.
Parameters
A : sparse matrix
k : int, optional
ncv : integer
tol : float, optional
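A hedged sketch (assuming, as in SciPy, that svds returns the triple (u, s, vt) with shapes (M, k), (k,), and (k, N)):
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import svds
>>> A = csc_matrix(np.random.rand(20, 10))
>>> u, s, vt = svds(A, k=3)        # the 3 largest singular triplets
>>> u.shape, s.shape, vt.shape
((20, 3), (3,), (3, 10))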
The default sparse solver is UMFPACK when available. This can be changed by passing useUmfpack=False, in which case the always-available SuperLU-based solver is used instead.
UMFPACK requires a CSR/CSC matrix to have sorted column/row indices. If you are sure the matrix fulfills this, pass assumeSortedIndices=True to gain some speed.