Warning
This documentation is a work in progress and is not yet organized.
Common interface for performing matrix-vector products
Many iterative methods (e.g. cg, gmres) do not need to know the individual entries of a matrix to solve a linear system A*x = b. Such solvers only require the computation of matrix-vector products A*v, where v is a dense vector. This class serves as an abstract interface between iterative solvers and matrix-like objects.
Parameters
    shape : tuple
        Matrix dimensions (M, N).
    matvec : callable f(v)
        Returns the matrix-vector product A * v.
Notes
The user-defined matvec() function must properly handle the case where v has shape (N,) as well as the (N,1) case. The shape of the return type is handled internally by LinearOperator.
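For illustration, a shape-robust matvec might flatten its input first, so that both cases above are handled by the same code (a sketch with a hypothetical safe_mv, not part of the original example):
>>> import numpy as np
>>> def safe_mv(v):
...     # flatten so that (N,) and (N,1) inputs are treated uniformly;
...     # LinearOperator reshapes the result as needed
...     v = np.asarray(v).ravel()
...     return np.array([2*v[0], 3*v[1]])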
Examples
>>> import numpy as np
>>> from scipy.sparse.linalg import LinearOperator
>>> def mv(v):
...     return np.array([2*v[0], 3*v[1]])
...
>>> A = LinearOperator((2, 2), matvec=mv)
>>> A
<2x2 LinearOperator with unspecified dtype>
>>> A.matvec(np.ones(2))
array([ 2.,  3.])
>>> A * np.ones(2)
array([ 2.,  3.])
Methods
    matmat(X)
        Matrix-matrix multiplication.
    matvec(x)
        Matrix-vector multiplication.
Matrix-matrix multiplication
Performs the operation Y = A*X where A is an MxN linear operator and X is a dense N-by-K matrix or ndarray.
Parameters
    X : {matrix, ndarray}
        A dense matrix or ndarray with shape (N, K).
Returns
    Y : {matrix, ndarray}
        The product A*X with shape (M, K); the return type depends on the type of X.
Notes
This matmat wraps any user-specified matmat routine to ensure that Y has the correct type.
Matrix-vector multiplication
Performs the operation y = A*x where A is an MxN linear operator and x is a column vector or rank-1 array.
Parameters
    x : {matrix, ndarray}
        An array or matrix with shape (N,) or (N, 1).
Returns
    y : {matrix, ndarray}
        The product A*x with shape (M,) or (M, 1), depending on the shape and type of x.
Notes
This matvec wraps the user-specified matvec routine to ensure that y has the correct shape and type.
Return A as a LinearOperator.
See the LinearOperator documentation for additional information.
Examples
>>> import numpy as np
>>> from scipy.sparse.linalg import aslinearoperator
>>> M = np.matrix([[1, 2, 3], [4, 5, 6]], dtype='int32')
>>> aslinearoperator(M)
<2x3 LinearOperator with dtype=int32>
Use BIConjugate Gradient iteration to solve A x = b
Parameters
    A : {sparse matrix, dense matrix, LinearOperator}
        The N-by-N matrix of the linear system.
    b : {array, matrix}
        Right hand side of the linear system.
Use BIConjugate Gradient STABilized iteration to solve A x = b
Parameters
    A : {sparse matrix, dense matrix, LinearOperator}
        The N-by-N matrix of the linear system.
    b : {array, matrix}
        Right hand side of the linear system.
Use Conjugate Gradient iteration to solve A x = b
Parameters
    A : {sparse matrix, dense matrix, LinearOperator}
        The N-by-N matrix of the linear system.
    b : {array, matrix}
        Right hand side of the linear system.
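All of the iterative solvers here (bicg, bicgstab, cg, cgs, and gmres and qmr below) share this calling convention and return a pair (x, info). A minimal sketch with cg on a small symmetric positive definite system, using illustrative values:
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import cg
>>> A = csc_matrix(np.array([[4., 1.], [1., 3.]]))  # symmetric positive definite
>>> b = np.array([1., 2.])
>>> x, info = cg(A, b)  # info == 0 indicates successful convergence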
Use Conjugate Gradient Squared iteration to solve A x = b
Parameters
    A : {sparse matrix, dense matrix, LinearOperator}
        The N-by-N matrix of the linear system.
    b : {array, matrix}
        Right hand side of the linear system.
Compressed Sparse Column matrix
Examples
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> csc_matrix((3, 4), dtype=np.int8).todense()
matrix([[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]], dtype=int8)
>>> row = np.array([0, 2, 2, 0, 1, 2])
>>> col = np.array([0, 0, 1, 2, 2, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> csc_matrix((data, (row, col)), shape=(3, 3)).todense()
matrix([[1, 0, 4],
        [0, 0, 5],
        [2, 3, 6]])
>>> indptr = np.array([0, 2, 3, 6])
>>> indices = np.array([0, 2, 2, 0, 1, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> csc_matrix((data, indices, indptr), shape=(3, 3)).todense()
matrix([[1, 0, 4],
        [0, 0, 5],
        [2, 3, 6]])
Methods
    asformat
    asfptype
    astype
    check_format
    conj
    conjugate
    copy
    diagonal
    dot
    eliminate_zeros
    ensure_sorted_indices
    getH
    get_shape
    getcol
    getdata
    getformat
    getmaxprint
    getnnz
    getrow
    listprint
    matmat
    matvec
    mean
    multiply
    nonzero
    prune
    reshape
    rmatvec
    rowcol (deprecated)
    save
    set_shape
    setdiag
    sort_indices
    sorted_indices
    sum
    sum_duplicates
    toarray
    tobsr
    tocoo
    tocsc
    tocsr
    todense
    todia
    todok
    tolil
    transpose
Compressed Sparse Row matrix
Examples
>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> csr_matrix((3, 4), dtype=np.int8).todense()
matrix([[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]], dtype=int8)
>>> row = np.array([0, 0, 1, 2, 2, 2])
>>> col = np.array([0, 2, 2, 0, 1, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> csr_matrix((data, (row, col)), shape=(3, 3)).todense()
matrix([[1, 0, 2],
        [0, 0, 3],
        [4, 5, 6]])
>>> indptr = np.array([0, 2, 3, 6])
>>> indices = np.array([0, 2, 2, 0, 1, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> csr_matrix((data, indices, indptr), shape=(3, 3)).todense()
matrix([[1, 0, 2],
        [0, 0, 3],
        [4, 5, 6]])
Methods
    asformat
    asfptype
    astype
    check_format
    conj
    conjugate
    copy
    diagonal
    dot
    eliminate_zeros
    ensure_sorted_indices
    getH
    get_shape
    getcol
    getdata
    getformat
    getmaxprint
    getnnz
    getrow
    listprint
    matmat
    matvec
    mean
    multiply
    nonzero
    prune
    reshape
    rmatvec
    rowcol (deprecated)
    save
    set_shape
    setdiag
    sort_indices
    sorted_indices
    sum
    sum_duplicates
    toarray
    tobsr
    tocoo
    tocsc
    tocsr
    todense
    todia
    todok
    tolil
    transpose
Find k eigenvalues and eigenvectors of the square matrix A.
Solves A * x[i] = w[i] * x[i], the standard eigenvalue problem for w[i] eigenvalues with corresponding eigenvectors x[i].
Parameters
    A : matrix, array, or object with matvec(x) method
        An N-by-N matrix, array, or object representing the operation A * x.
    k : integer
        The number of eigenvalues and eigenvectors desired.
Returns
    w : array
        Array of k eigenvalues.
    v : array
        An array of k eigenvectors; v[:, i] is the eigenvector corresponding to the eigenvalue w[i].
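A minimal sketch using the scipy.sparse.linalg.eigs name under which this ARPACK-based solver is exposed (the diagonal test matrix is illustrative; ARPACK requires k < N - 1):
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import eigs
>>> A = csc_matrix(np.diag(np.arange(1., 7.)))  # eigenvalues 1..6
>>> w, v = eigs(A, k=3)  # three largest-magnitude eigenvalues
>>> np.sort(w.real)  # approximately [4., 5., 6.]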
Find k eigenvalues and eigenvectors of the real symmetric square matrix A.
Solves A * x[i] = w[i] * x[i], the standard eigenvalue problem for w[i] eigenvalues with corresponding eigenvectors x[i].
Parameters
    A : matrix or array with real entries, or object with matvec(x) method
        An N-by-N real symmetric matrix or object representing the operation A * x.
    k : integer
        The number of eigenvalues and eigenvectors desired.
Returns
    w : array
        Array of k eigenvalues.
    v : array
        An array of k eigenvectors; v[:, i] is the eigenvector corresponding to the eigenvalue w[i].
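The symmetric variant is used the same way; a sketch with the scipy.sparse.linalg.eigsh name (illustrative matrix):
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import eigsh
>>> A = csc_matrix(np.diag(np.arange(1., 7.)))  # real symmetric
>>> w, v = eigsh(A, k=2)  # two largest-magnitude eigenvalues
>>> np.sort(w)  # approximately [5., 6.]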
Return a function for solving a sparse linear system, with A pre-factorized.
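A sketch of the intended usage (illustrative values): factorize once, then reuse the returned solve function for several right-hand sides.
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import factorized
>>> A = csc_matrix(np.array([[3., 2.], [1., 4.]]))
>>> solve = factorized(A)  # factorize A once
>>> x1 = solve(np.array([1., 0.]))  # reuse the factors for
>>> x2 = solve(np.array([0., 1.]))  # multiple right-hand sides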
Use Generalized Minimal RESidual iteration to solve A x = b
Parameters
    A : {sparse matrix, dense matrix, LinearOperator}
        The N-by-N matrix of the linear system.
    b : {array, matrix}
        Right hand side of the linear system.
Solve a matrix equation using the LGMRES algorithm.
The LGMRES algorithm [BJM] [BPh] is designed to avoid some problems with convergence in restarted GMRES, and often converges in fewer iterations.
Parameters
    A : {sparse matrix, dense matrix, LinearOperator}
        The N-by-N matrix of the linear system.
    b : {array, matrix}
        Right hand side of the linear system.
    x0 : {array, matrix}
        Starting guess for the solution.
    tol : float
        Tolerance used to decide convergence.
    maxiter : integer
        Maximum number of iterations.
    M : {sparse matrix, dense matrix, LinearOperator}
        Preconditioner for A.
    callback : function
        User-supplied function, called after each iteration with the current solution vector.
Returns
    x : array or matrix
        The converged solution.
    info : integer
        Provides convergence information: 0 if successful, >0 if convergence was not achieved (number of iterations), <0 on illegal input or breakdown.
Notes
The LGMRES algorithm [BJM] [BPh] is designed to avoid the slowing of convergence in restarted GMRES, due to alternating residual vectors. It typically outperforms GMRES(m) of comparable memory requirements by some measure, or at least is not much worse.
Another advantage of this algorithm is that you can supply it with ‘guess’ vectors in the outer_v argument that augment the Krylov subspace. If the solution lies close to the span of these vectors, the algorithm converges faster. This can be useful if several very similar matrices need to be inverted one after another, such as in Newton-Krylov iteration, where the Jacobian matrix often changes little between nonlinear steps.
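A sketch of the outer_v mechanism, assuming it accepts a list of (v, Av) pairs where Av may be None to have A*v recomputed (illustrative values):
>>> import numpy as np
>>> from scipy.sparse import csc_matrix, identity
>>> from scipy.sparse.linalg import lgmres
>>> A = csc_matrix(np.array([[4., 1.], [1., 3.]]))
>>> b = np.array([1., 2.])
>>> x, info = lgmres(A, b)
>>> A2 = A + 0.01 * identity(2, format='csc')  # a nearby system
>>> x2, info2 = lgmres(A2, b, outer_v=[(x, None)])  # seed with previous solution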
References
[BJM] A.H. Baker, E.R. Jessup and T. Manteuffel, SIAM J. Matrix Anal. Appl. 26, 962 (2005).
[BPh] A.H. Baker, PhD thesis, University of Colorado (2003). http://amath.colorado.edu/activities/thesis/allisonb/Thesis.ps
Solve symmetric partial eigenproblems with optional preconditioning
This function implements the Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG).
Parameters
    A : {sparse matrix, dense matrix, LinearOperator}
        The symmetric linear operator of the problem.
    X : array_like
        Initial approximation to the k eigenvectors, of shape (n, k).
Returns
    w : array
        Array of k eigenvalues.
    v : array
        Array of k eigenvectors, of the same shape as X.
Notes
If both retLambdaHistory and retResidualNormsHistory are True, the return tuple has the following format (lambda, V, lambda history, residual norms history)
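A minimal sketch (illustrative values): X supplies the initial block of approximate eigenvectors, and its column count determines how many eigenpairs are computed.
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import lobpcg
>>> A = csc_matrix(np.diag(np.arange(1., 101.)))  # symmetric positive definite
>>> X = np.random.rand(100, 3)  # initial block of 3 approximate eigenvectors
>>> w, v = lobpcg(A, X, tol=1e-8, largest=True)  # three largest eigenvalues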
Find the least-squares solution to a large, sparse, linear system of equations.
The function solves Ax = b, or min ||Ax - b||^2, or min ||Ax - b||^2 + damp^2 ||x||^2.
The matrix A may be square or rectangular (over-determined or under-determined), and may have any rank.
1. Unsymmetric equations -- solve A*x = b

2. Linear least squares -- solve A*x = b in the least-squares sense

3. Damped least squares -- solve ( A      ) * x = ( b )
                                 ( damp*I )       ( 0 )
   in the least-squares sense
Parameters
    A : {sparse matrix, ndarray, LinearOperator}
        Representation of an m-by-n matrix.
    b : (m,) ndarray
        Right-hand side vector.
    damp : float
        Damping coefficient for problem 3.
    atol, btol : float
        Stopping tolerances.
    conlim : float
        Terminate if an estimate of cond(A) exceeds conlim.
    iter_lim : int
        Explicit limit on the number of iterations.
    show : bool
        Display an iteration log if True.
    calc_var : bool
        Whether to estimate the variance vector var.
Returns
    x : ndarray of float
        The final solution.
    istop : int
        Reason for termination.
    itn : int
        Number of iterations performed.
    r1norm : float
        norm(r), where r = b - A*x.
    r2norm : float
        sqrt(norm(r)^2 + damp^2 * norm(x)^2); equals r1norm when damp == 0.
    anorm : float
        Estimate of the Frobenius norm of the (damped) operator.
    acond : float
        Estimate of its condition number.
    arnorm : float
        Estimate of the norm of the normal-equations residual.
    xnorm : float
        norm(x).
    var : ndarray of float
        Estimated diagonal of (A'A + damp^2*I)^(-1), if calc_var is True.
Notes
LSQR uses an iterative method to approximate the solution. The number of iterations required to reach a certain accuracy depends strongly on the scaling of the problem. Poor scaling of the rows or columns of A should therefore be avoided where possible.
For example, in problem 1 the solution is unaltered by row-scaling. If a row of A is very small or large compared to the other rows of A, the corresponding row of ( A b ) should be scaled up or down.
In problems 1 and 2, the solution x is easily recovered following column-scaling. Unless better information is known, the nonzero columns of A should be scaled so that they all have the same Euclidean norm (e.g., 1.0).
In problem 3, there is no freedom to re-scale if damp is nonzero. However, the value of damp should be assigned only after attention has been paid to the scaling of A.
The parameter damp is intended to help regularize ill-conditioned systems, by preventing the true solution from being very large. Another aid to regularization is provided by the parameter conlim: iterations are terminated early once the condition estimate acond exceeds conlim, before the computed solution becomes very large.
If some initial estimate x0 is known and if damp == 0, one could proceed as follows:
- Compute a residual vector r0 = b - A*x0.
- Use LSQR to solve the system A*dx = r0.
- Add the correction dx to obtain a final solution x = x0 + dx.
This requires that x0 be available before and after the call to LSQR. To judge the benefits, suppose LSQR takes k1 iterations to solve A*x = b and k2 iterations to solve A*dx = r0. If x0 is “good”, norm(r0) will be smaller than norm(b). If the same stopping tolerances atol and btol are used for each system, k1 and k2 will be similar, but the final solution x0 + dx should be more accurate. The only way to reduce the total work is to use a larger stopping tolerance for the second system. If some value btol is suitable for A*x = b, the larger value btol*norm(b)/norm(r0) should be suitable for A*dx = r0.
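A sketch of this warm-start procedure (illustrative values):
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import lsqr
>>> A = csc_matrix(np.array([[1., 0.], [1., 1.], [0., 1.]]))
>>> b = np.array([1., 0., -1.])
>>> x0 = np.array([0.9, -0.9])  # known initial estimate
>>> r0 = b - A.dot(x0)  # residual of the initial estimate
>>> dx = lsqr(A, r0)[0]  # solve A*dx = r0 for the correction
>>> x = x0 + dx  # final solution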
Preconditioning is another way to reduce the number of iterations. If it is possible to solve a related system M*x = b efficiently, where M approximates A in some helpful way (e.g. M - A has low rank or its elements are small relative to those of A), LSQR may converge more rapidly on the system A*M^(-1)*z = b, after which x can be recovered by solving M*x = z.
If A is symmetric, LSQR should not be used!
Alternatives are the symmetric conjugate-gradient method (cg) and/or SYMMLQ. SYMMLQ is an implementation of symmetric cg that applies to any symmetric A and will converge more rapidly than LSQR. If A is positive definite, there are other implementations of symmetric cg that require slightly less work per iteration than SYMMLQ (but will take the same number of iterations).
References
[R112] C. C. Paige and M. A. Saunders (1982a). "LSQR: An algorithm for sparse linear equations and sparse least squares", ACM TOMS 8(1), 43-71.
[R113] C. C. Paige and M. A. Saunders (1982b). "Algorithm 583. LSQR: Sparse linear equations and least squares problems", ACM TOMS 8(2), 195-209.
[R114] M. A. Saunders (1995). "Solution of sparse rectangular systems using LSQR and CRAIG", BIT 35, 588-604.
Use MINimum RESidual iteration to solve Ax=b
MINRES minimizes norm(A*x - b) for the symmetric matrix A. Unlike the Conjugate Gradient method, A can be indefinite or singular.
If shift != 0 then the method solves (A - shift*I)x = b
Parameters
    A : {sparse matrix, dense matrix, LinearOperator}
        The N-by-N matrix of the linear system.
    b : {array, matrix}
        Right hand side of the linear system.
Notes
THIS FUNCTION IS EXPERIMENTAL AND SUBJECT TO CHANGE!
Use Quasi-Minimal Residual iteration to solve A x = b
Parameters
    A : {sparse matrix, dense matrix, LinearOperator}
        The N-by-N matrix of the linear system.
    b : {array, matrix}
        Right hand side of the linear system.
Compute an incomplete LU decomposition for a sparse, square matrix A.
The resulting object is an approximation to the inverse of A.
Parameters
    A : sparse matrix
        Sparse matrix to factorize.
    drop_tol : float, optional
        Drop tolerance for the incomplete LU decomposition.
    fill_factor : float, optional
        Fill ratio upper bound for the incomplete factors.
    drop_rule : str, optional
        Comma-separated string of drop rules to use.
    milu : str, optional
        Which version of modified ILU to use.
    Remaining other options : same as for splu.
Returns
    invA_approx : scipy.sparse.linalg.dsolve._superlu.SciPyLUType
        Object which has a solve method.
Notes
To get a better approximation to the inverse, you may need to increase fill_factor and decrease drop_tol.
This function uses the SuperLU library.
References
[SLU] SuperLU http://crd.lbl.gov/~xiaoye/SuperLU/
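A common pattern (a sketch, not from the original docstring) is to wrap the factorization's solve method in a LinearOperator and use it as a preconditioner for an iterative solver:
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import spilu, LinearOperator, gmres
>>> A = csc_matrix(np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]]))
>>> b = np.array([1., 2., 3.])
>>> ilu = spilu(A, drop_tol=1e-4)  # incomplete LU factorization of A
>>> M = LinearOperator(A.shape, matvec=ilu.solve)  # approximate inverse of A
>>> x, info = gmres(A, b, M=M)  # preconditioned GMRES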
Compute the LU decomposition of a sparse, square matrix.
Parameters
    A : sparse matrix
        Sparse matrix to factorize.
    permc_spec : str, optional
        How to permute the columns of the matrix for sparsity preservation.
    diag_pivot_thresh : float, optional
        Threshold used for a diagonal entry to be an acceptable pivot.
    drop_tol : float, optional
    relax : int, optional
    panel_size : int, optional
    options : dict, optional
        Dictionary of additional expert options passed to SuperLU.
Returns
    invA : scipy.sparse.linalg.dsolve._superlu.SciPyLUType
        Object which has a solve method.
Notes
This function uses the SuperLU library.
References
[SLU] SuperLU http://crd.lbl.gov/~xiaoye/SuperLU/
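A minimal sketch (illustrative values): the returned object exposes a solve method that applies the computed factors.
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import splu
>>> A = csc_matrix(np.array([[3., 2.], [1., 4.]]))
>>> lu = splu(A)  # complete LU factorization via SuperLU
>>> x = lu.solve(np.array([1., 0.]))  # solve A*x = b with the factors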
Solve the sparse linear system A x = b
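A minimal sketch (illustrative values):
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import spsolve
>>> A = csc_matrix(np.array([[3., 2.], [1., 4.]]))
>>> b = np.array([1., 0.])
>>> x = spsolve(A, b)  # x satisfies A*x = b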
Compute a few singular values/vectors for a sparse matrix using ARPACK.
Parameters
    A : sparse matrix
        The matrix for which to compute singular values and vectors.
    k : int
        Number of singular values and singular vectors to compute.
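A minimal sketch using the scipy.sparse.linalg.svds name (illustrative matrix; k must be smaller than min(A.shape)):
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import svds
>>> A = csc_matrix(np.diag(np.arange(1., 4.)))  # singular values 1, 2, 3
>>> u, s, vt = svds(A, k=2)  # two largest singular values
>>> np.sort(s)  # approximately [2., 3.]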
The default sparse solver is UMFPACK when available. This can be changed by passing useUmfpack=False, which then causes the always-available SuperLU-based solver to be used.
UMFPACK requires a CSR/CSC matrix to have sorted column/row indices. If you are sure the matrix fulfills this, pass assumeSortedIndices=True to gain some speed.
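These flags are set through use_solver; a short sketch, assuming the keyword names given above:
>>> from scipy.sparse.linalg import use_solver
>>> # prefer the always-available SuperLU solver over UMFPACK, and promise
>>> # that CSR/CSC indices are already sorted
>>> use_solver(useUmfpack=False, assumeSortedIndices=True)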