MATLAB Function Reference

lsqr
LSQR implementation of Conjugate Gradients on the Normal Equations
Syntax
    x = lsqr(A,b)
    lsqr(A,b,tol)
    lsqr(A,b,tol,maxit)
    lsqr(A,b,tol,maxit,M)
    lsqr(A,b,tol,maxit,M1,M2)
    lsqr(A,b,tol,maxit,M1,M2,x0)
    lsqr(afun,b,tol,maxit,m1fun,m2fun,x0,p1,p2,...)
    [x,flag] = lsqr(A,b,...)
    [x,flag,relres] = lsqr(A,b,...)
    [x,flag,relres,iter] = lsqr(A,b,...)
    [x,flag,relres,iter,resvec] = lsqr(A,b,...)
    [x,flag,relres,iter,resvec,lsvec] = lsqr(A,b,...)
Description
x = lsqr(A,b) attempts to solve the system of linear equations A*x=b for x if A is consistent; otherwise, it attempts to find the least squares solution x that minimizes norm(b-A*x). The m-by-n coefficient matrix A need not be square, but it should be large and sparse. The column vector b must have length m. A can be a function afun such that afun(x) returns A*x and afun(x,'transp') returns A'*x.
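As an illustration, a minimal call on a small overdetermined sparse system might look like this (the matrix and right-hand side below are made up for the sketch, not taken from this page):

```matlab
% Hypothetical example: overdetermined sparse system A*x = b
m = 200; n = 100;
A = sprandn(m,n,0.1) + speye(m,n);   % random sparse m-by-n matrix
b = A*ones(n,1);                     % consistent right-hand side
x = lsqr(A,b);                       % least squares solution via LSQR
```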
If lsqr converges, a message to that effect is displayed. If lsqr fails to converge after the maximum number of iterations or halts for any reason, a warning message is printed displaying the relative residual norm(b-A*x)/norm(b) and the iteration number at which the method stopped or failed.
lsqr(A,b,tol) specifies the tolerance of the method. If tol is [], then lsqr uses the default, 1e-6.
lsqr(A,b,tol,maxit) specifies the maximum number of iterations. If maxit is [], then lsqr uses the default, min([m,n,20]).
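For instance, either argument can be tightened independently while [] keeps the other at its default (the test matrix below is an illustrative stand-in):

```matlab
n = 100;
A = gallery('poisson',10);   % built-in 100-by-100 sparse test matrix
b = ones(n,1);
x = lsqr(A,b,1e-10,50);      % tol = 1e-10, at most 50 iterations
x = lsqr(A,b,[],50);         % [] keeps the default tol = 1e-6
```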
lsqr(A,b,tol,maxit,M) and lsqr(A,b,tol,maxit,M1,M2) use the n-by-n preconditioner M or M = M1*M2 and effectively solve the system A*inv(M)*y = b for y, where x = M*y. If M is [], then lsqr applies no preconditioner. M can be a function mfun such that mfun(x) returns M\x and mfun(x,'transp') returns M'\x.
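As a sketch of single-argument preconditioning (using a built-in test matrix rather than the documented example), a simple diagonal (Jacobi) preconditioner can be passed as the fifth argument:

```matlab
n = 100;
A = gallery('poisson',10);      % illustrative sparse test matrix
b = ones(n,1);
M = spdiags(diag(A),0,n,n);     % diagonal (Jacobi) preconditioner
x = lsqr(A,b,1e-8,50,M);        % lsqr solves A*inv(M)*y = b, x = M*y
```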
lsqr(A,b,tol,maxit,M1,M2,x0) specifies the n-by-1 initial guess. If x0 is [], then lsqr uses the default, an all-zero vector.
lsqr(afun,b,tol,maxit,m1fun,m2fun,x0,p1,p2,...) passes parameters p1,p2,... to functions afun(x,p1,p2,...) and afun(x,p1,p2,...,'transp') and similarly to the preconditioner functions m1fun and m2fun.
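The parameter-passing convention can be sketched as follows; scalefun and the scale factor c are hypothetical names invented for this illustration, not part of the documented interface:

```matlab
function y = scalefun(x,c,transp_flag)
% Hypothetical operator A = c*eye(n); c arrives via lsqr's p1 slot
if (nargin > 2) & strcmp(transp_flag,'transp')
   y = c * x;     % A'*x (A is symmetric here)
else
   y = c * x;     % A*x
end
```

which would be called as, e.g., x = lsqr(@scalefun,b,[],[],[],[],[],c).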
[x,flag] = lsqr(A,b,tol,maxit,M1,M2,x0) also returns a convergence flag. Whenever flag is not 0, the solution x returned is the one with minimal norm residual computed over all the iterations. No messages are displayed if the flag output is specified.
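For example, requesting the flag output suppresses the printed messages, so convergence can be checked programmatically (the tight tolerance and small maxit here are chosen to illustrate a nonzero flag, not documented behavior):

```matlab
n = 100;
A = gallery('poisson',10);       % illustrative sparse test matrix
b = ones(n,1);
[x,flag] = lsqr(A,b,1e-14,3);    % deliberately too few iterations
if flag ~= 0
   % x is still the iterate with the smallest residual norm found
   disp('lsqr did not meet the tolerance within maxit iterations')
end
```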
[x,flag,relres] = lsqr(A,b,tol,maxit,M1,M2,x0) also returns an estimate of the relative residual norm(b-A*x)/norm(b). If flag is 0, relres <= tol.
[x,flag,relres,iter] = lsqr(A,b,tol,maxit,M1,M2,x0) also returns the iteration number at which x was computed, where 0 <= iter <= maxit.
[x,flag,relres,iter,resvec] = lsqr(A,b,tol,maxit,M1,M2,x0) also returns a vector of the residual norm estimates at each iteration, including norm(b-A*x0).
[x,flag,relres,iter,resvec,lsvec] = lsqr(A,b,tol,maxit,M1,M2,x0) also returns a vector of estimates of the scaled normal equations residual at each iteration: norm((A*inv(M))'*(b-A*x))/norm(A*inv(M),'fro'). Note that the estimate of norm(A*inv(M),'fro') changes, and hopefully improves, at each iteration.
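Because resvec records one estimate per iteration starting from the initial guess, it can be used to plot a convergence history; this sketch again uses an illustrative test matrix rather than the documented example:

```matlab
n = 100;
A = gallery('poisson',10);       % illustrative sparse test matrix
b = ones(n,1);
[x,flag,relres,iter,resvec] = lsqr(A,b,1e-8,30);
% resvec(1) is norm(b-A*x0), so iterations run from 0
semilogy(0:length(resvec)-1,resvec/norm(b),'-o')
xlabel('iteration'), ylabel('relative residual')
```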
Examples
    n = 100;
    on = ones(n,1);
    A = spdiags([-2*on 4*on -on],-1:1,n,n);
    b = sum(A,2);
    tol = 1e-8;
    maxit = 15;
    M1 = spdiags([on/(-2) on],-1:0,n,n);
    M2 = spdiags([4*on -on],0:1,n,n);

    x = lsqr(A,b,tol,maxit,M1,M2,[]);

displays the message

    lsqr converged at iteration 11 to a solution with relative residual 3.5e-009
Alternatively, use this matrix-vector product function

    function y = afun(x,n,transp_flag)
    if (nargin > 2) & strcmp(transp_flag,'transp')
       y = 4 * x;
       y(1:n-1) = y(1:n-1) - 2 * x(2:n);
       y(2:n) = y(2:n) - x(1:n-1);
    else
       y = 4 * x;
       y(2:n) = y(2:n) - 2 * x(1:n-1);
       y(1:n-1) = y(1:n-1) - x(2:n);
    end

as input to lsqr:

    x1 = lsqr(@afun,b,tol,maxit,M1,M2,[],n);
See Also
bicg, bicgstab, cgs, gmres, minres, norm, pcg, qmr, symmlq

@ (function handle)
References
[1] Barrett, R., M. Berry, T. F. Chan, et al., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM, Philadelphia, 1994.
[2] Paige, C. C. and M. A. Saunders, "LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares," ACM Trans. Math. Soft., Vol. 8, 1982, pp. 43-71.