MATLAB Function Reference    
lsqr

LSQR implementation of Conjugate Gradients on the Normal Equations

Syntax

x = lsqr(A,b)
lsqr(A,b,tol)
lsqr(A,b,tol,maxit)
lsqr(A,b,tol,maxit,M)
lsqr(A,b,tol,maxit,M1,M2)
lsqr(A,b,tol,maxit,M1,M2,x0)
lsqr(afun,b,tol,maxit,m1fun,m2fun,x0,p1,p2,...)
[x,flag] = lsqr(A,b,tol,maxit,M1,M2,x0)
[x,flag,relres] = lsqr(A,b,tol,maxit,M1,M2,x0)
[x,flag,relres,iter] = lsqr(A,b,tol,maxit,M1,M2,x0)
[x,flag,relres,iter,resvec] = lsqr(A,b,tol,maxit,M1,M2,x0)
[x,flag,relres,iter,resvec,lsvec] = lsqr(A,b,tol,maxit,M1,M2,x0)

Description

x = lsqr(A,b) attempts to solve the system of linear equations A*x=b for x when the system is consistent; otherwise, it attempts to find the least squares solution x that minimizes norm(b-A*x). The m-by-n coefficient matrix A need not be square, but it should be large and sparse. The column vector b must have length m. A can be a function afun such that afun(x) returns A*x and afun(x,'transp') returns A'*x.

If lsqr converges, a message to that effect is displayed. If lsqr fails to converge after the maximum number of iterations or halts for any reason, a warning message is printed displaying the relative residual norm(b-A*x)/norm(b) and the iteration number at which the method stopped or failed.

lsqr(A,b,tol) specifies the tolerance of the method. If tol is [], then lsqr uses the default, 1e-6.

lsqr(A,b,tol,maxit) specifies the maximum number of iterations. If maxit is [], then lsqr uses the default, min([m,n,20]).

lsqr(A,b,tol,maxit,M) and lsqr(A,b,tol,maxit,M1,M2) use n-by-n preconditioner M or M = M1*M2 and effectively solve the system A*inv(M)*y = b for y, where y = M*x. If M is [], then lsqr applies no preconditioner. M can be a function mfun such that mfun(x) returns M\x and mfun(x,'transp') returns M'\x.
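For illustration, a preconditioner supplied as a function might look like the following sketch. The name mfun and the bidiagonal M it applies are assumptions chosen for demonstration; the function follows the mfun(x) / mfun(x,'transp') calling convention described above.

function y = mfun(x,transp_flag)
% Sketch of a function preconditioner: M is lower bidiagonal with ones on
% the diagonal and -0.5 on the subdiagonal.  In practice M (or its factors)
% would be precomputed rather than rebuilt on every call.
n = numel(x);
M = speye(n) - 0.5*spdiags(ones(n,1),-1,n,n);
if nargin > 1 && strcmp(transp_flag,'transp')
    y = M' \ x;   % return M'\x
else
    y = M \ x;    % return M\x
end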

lsqr(A,b,tol,maxit,M1,M2,x0) specifies the n-by-1 initial guess. If x0 is [], then lsqr uses the default, an all zero vector.

lsqr(afun,b,tol,maxit,m1fun,m2fun,x0,p1,p2,...) passes parameters p1,p2,... to functions afun(x,p1,p2,...) and afun(x,p1,p2,...,'transp') and similarly to the preconditioner functions m1fun and m2fun.

[x,flag] = lsqr(A,b,tol,maxit,M1,M2,x0) also returns a convergence flag.

Flag    Convergence
0       lsqr converged to the desired tolerance tol within maxit iterations.
1       lsqr iterated maxit times but did not converge.
2       Preconditioner M was ill-conditioned.
3       lsqr stagnated. (Two consecutive iterates were the same.)
4       One of the scalar quantities calculated during lsqr became too small or too large to continue computing.

Whenever flag is not 0, the solution x returned is that with minimal norm residual computed over all the iterations. No messages are displayed if the flag output is specified.

[x,flag,relres] = lsqr(A,b,tol,maxit,M1,M2,x0) also returns an estimate of the relative residual norm(b-A*x)/norm(b). If flag is 0, relres <= tol.

[x,flag,relres,iter] = lsqr(A,b,tol,maxit,M1,M2,x0) also returns the iteration number at which x was computed, where 0 <= iter <= maxit.

[x,flag,relres,iter,resvec] = lsqr(A,b,tol,maxit,M1,M2,x0) also returns a vector of the residual norm estimates at each iteration, including norm(b-A*x0).

[x,flag,relres,iter,resvec,lsvec] = lsqr(A,b,tol,maxit,M1,M2,x0) also returns a vector of estimates of the scaled normal equations residual at each iteration: norm((A*inv(M))'*(b-A*x))/norm(A*inv(M),'fro'). Note that the estimate of norm(A*inv(M),'fro') changes, and hopefully improves, at each iteration.
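As a sketch of how the additional outputs might be used (the plotting commands are illustrative and assume A, b, tol, maxit, M1, and M2 have already been defined, for instance as in the Examples section below):

[x,flag,relres,iter,resvec,lsvec] = lsqr(A,b,tol,maxit,M1,M2);
% resvec(1) corresponds to the initial guess, so plot against iterations 0,1,2,...
semilogy(0:length(resvec)-1,resvec/norm(b),'-o')
xlabel('iteration number')
ylabel('estimated relative residual')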

Examples
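A sketch of a typical call follows; the tridiagonal test matrix, right-hand side, tolerance, iteration limit, and preconditioner factors are illustrative choices rather than values prescribed by lsqr:

n = 100; on = ones(n,1);
A = spdiags([-2*on 4*on -on],-1:1,n,n);   % sparse tridiagonal coefficient matrix
b = sum(A,2);                             % right-hand side, so the exact solution is ones(n,1)
tol = 1e-8; maxit = 15;
M1 = spdiags([on/(-2) on],-1:0,n,n);      % lower bidiagonal preconditioner factor
M2 = spdiags([4*on -on],0:1,n,n);         % upper bidiagonal preconditioner factor

x = lsqr(A,b,tol,maxit,M1,M2);

Because no flag output is requested, lsqr displays a message when it converges (or a warning if it does not).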

Alternatively, use this matrix-vector product function
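(a sketch: it applies the same tridiagonal A used above, and the name afun and the trailing 'transp' flag follow the convention described under Description)

function y = afun(x,n,transp_flag)
% Sketch: apply the tridiagonal matrix A (4 on the diagonal, -2 on the
% subdiagonal, -1 on the superdiagonal) without forming it explicitly.
if nargin > 2 && strcmp(transp_flag,'transp')
    % y = A'*x
    y = 4*x;
    y(1:n-1) = y(1:n-1) - 2*x(2:n);
    y(2:n)   = y(2:n)   - x(1:n-1);
else
    % y = A*x
    y = 4*x;
    y(2:n)   = y(2:n)   - 2*x(1:n-1);
    y(1:n-1) = y(1:n-1) - x(2:n);
end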

as input to lsqr.
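For example, the following call is a sketch that reuses b, tol, maxit, M1, and M2 from the first example and passes the trailing n through to afun as an extra parameter:

x1 = lsqr(@afun,b,tol,maxit,M1,M2,[],n);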

See Also

bicg, bicgstab, cgs, gmres, minres, norm, pcg, qmr, symmlq, @ (function handle)

References

[1]  Barrett, R., M. Berry, T. F. Chan, et al., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM, Philadelphia, 1994.

[2]  Paige, C. C. and M. A. Saunders, "LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares," ACM Trans. Math. Soft., Vol. 8, 1982, pp. 43-71.

