Optimization Toolbox 
Solve nonlinear least-squares (nonlinear data-fitting) problems
Syntax
x = lsqnonlin(fun,x0)
x = lsqnonlin(fun,x0,lb,ub)
x = lsqnonlin(fun,x0,lb,ub,options)
x = lsqnonlin(fun,x0,lb,ub,options,P1,P2, ... )
[x,resnorm] = lsqnonlin(...)
[x,resnorm,residual] = lsqnonlin(...)
[x,resnorm,residual,exitflag] = lsqnonlin(...)
[x,resnorm,residual,exitflag,output] = lsqnonlin(...)
[x,resnorm,residual,exitflag,output,lambda] = lsqnonlin(...)
[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqnonlin(...)
Description
lsqnonlin solves nonlinear least-squares problems, including nonlinear data-fitting problems.

Rather than compute the value f(x) (the "sum of squares")

    f(x) = f1(x)^2 + f2(x)^2 + ... + fn(x)^2

lsqnonlin requires the user-defined function to compute the vector-valued function

    F(x) = [f1(x); f2(x); ...; fn(x)]

Then, in vector terms, this optimization problem may be restated as

    min over x of ||F(x)||_2^2 = min over x of (f1(x)^2 + f2(x)^2 + ... + fn(x)^2)

where x is a vector and F(x) is a function that returns a vector value.
x = lsqnonlin(fun,x0) starts at the point x0 and finds a minimum of the sum of squares of the functions described in fun. The function fun should return a vector of values, not the sum of squares of the values. (The algorithm implicitly sums and squares fun(x).)
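As a minimal sketch (the residual values here are illustrative, not from this page), define an M-file function that returns the residual vector and call lsqnonlin on it:

```matlab
% myfun.m -- returns the residual vector; lsqnonlin minimizes sum(F.^2)
function F = myfun(x)
F = [x(1) - 1;
     2*x(1) - x(2)];
```

```matlab
x0 = [0 0];                 % starting guess
x = lsqnonlin(@myfun, x0);  % x approaches [1 2], where F(x) = [0; 0]
```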
x = lsqnonlin(fun,x0,lb,ub) defines a set of lower and upper bounds on the design variables, x, so that the solution is always in the range lb <= x <= ub.
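For example, with illustrative bound values (myfun is any residual function written as described on this page):

```matlab
x0 = [1 1];    % starting guess
lb = [0 0];    % lower bounds on x(1) and x(2)
ub = [2 2];    % upper bounds
x = lsqnonlin(@myfun, x0, lb, ub);   % solution satisfies lb <= x <= ub
```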
x = lsqnonlin(fun,x0,lb,ub,options) minimizes with the optimization parameters specified in the structure options. Use optimset to set these parameters. Pass empty matrices for lb and ub if no bounds exist.
x = lsqnonlin(fun,x0,lb,ub,options,P1,P2,...) passes the problem-dependent parameters P1, P2, etc., directly to the function fun. Pass an empty matrix for options to use the default values for options.
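A sketch, assuming myfun has been written to accept the extra arguments (the parameter names a and b are illustrative):

```matlab
% myfun must then have the signature F = myfun(x,a,b)
a = 2; b = 3;                          % problem-dependent data
x0 = [0 0];
options = optimset('Display','iter');  % show output at each iteration
x = lsqnonlin(@myfun, x0, [], [], options, a, b);
```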
[x,resnorm] = lsqnonlin(...) returns the value of the squared 2-norm of the residual at x: sum(fun(x).^2).

[x,resnorm,residual] = lsqnonlin(...) returns the value of the residual, fun(x), at the solution x.
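The relationship between resnorm and residual can be checked directly (myfun as described above):

```matlab
x0 = [0 0];
[x, resnorm, residual] = lsqnonlin(@myfun, x0);
% resnorm is the squared 2-norm of the residual vector at x:
disp(resnorm - sum(residual.^2))   % essentially zero
```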
[x,resnorm,residual,exitflag] = lsqnonlin(...) returns a value exitflag that describes the exit condition.

[x,resnorm,residual,exitflag,output] = lsqnonlin(...) returns a structure output that contains information about the optimization.

[x,resnorm,residual,exitflag,output,lambda] = lsqnonlin(...) returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.

[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqnonlin(...) returns the Jacobian of fun at the solution x.
Input Arguments
Function Arguments contains general descriptions of arguments passed in to lsqnonlin. This section provides function-specific details for fun and options:
fun
The function whose sum of squares is minimized. fun is a function that accepts a vector x and returns a vector F, the objective functions evaluated at x. The function fun can be specified as a function handle:

    x = lsqnonlin(@myfun,x0)

where myfun is a MATLAB function such as

    function F = myfun(x)
    F = ...            % Compute function values at x

fun can also be an inline object:

    x = lsqnonlin(inline('sin(x.*x)'),x0);

If the Jacobian can also be computed and the Jacobian parameter is 'on', set by

    options = optimset('Jacobian','on')

then the function fun must return, in a second output argument, the Jacobian value J, a matrix, at x. Note that by checking the value of nargout, the function can avoid computing J when fun is called with only one output argument (in the case where the optimization algorithm only needs the value of F but not J).

If fun returns a vector (matrix) of m components and x has length n, where n is the length of x0, then the Jacobian J is an m-by-n matrix where J(i,j) is the partial derivative of F(i) with respect to x(j). (Note that the Jacobian J is the transpose of the gradient of F.)
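A sketch of a function that supplies its own Jacobian when the Jacobian parameter is 'on' (the function name and residuals are illustrative):

```matlab
% myfun_with_jac.m
function [F,J] = myfun_with_jac(x)
F = [x(1)^2 - 1;          % residual components
     x(2)^2 - 2];
if nargout > 1            % compute J only when a second output is requested
    J = [2*x(1)  0;       % J(i,j) = dF(i)/dx(j), a 2-by-2 matrix here
         0       2*x(2)];
end
```

Invoke it with x = lsqnonlin(@myfun_with_jac, x0, [], [], optimset('Jacobian','on')).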
options 
Options provides the function-specific details for the options parameters.
Output Arguments
Function Arguments contains general descriptions of arguments returned by lsqnonlin. This section provides function-specific details for exitflag, lambda, and output:
exitflag

> 0
The function converged to a solution x.

0
The maximum number of function evaluations or iterations was exceeded.

< 0
The function did not converge to a solution.
lambda
Structure containing the Lagrange multipliers at the solution x. The fields are:

lower
Lower bounds lb

upper
Upper bounds ub
output
Structure containing information about the optimization. The fields are:

iterations
Number of iterations taken

funcCount
The number of function evaluations

algorithm
Algorithm used

cgiterations
Number of PCG iterations (large-scale algorithm only)

stepsize
The final step size taken (medium-scale algorithm only)

firstorderopt
Measure of first-order optimality (large-scale algorithm only). For large-scale bound constrained problems, the first-order optimality is the infinity norm of v.*g, where v is defined as in Box Constraints, and g is the gradient g = J'*F (see Nonlinear Least-Squares).
Note The sum of squares should not be formed explicitly. Instead, your function should return a vector of function values. See the example below. 
Options
Optimization parameter options. You can set or change the values of these parameters using the optimset function. Some parameters apply to all algorithms, some are only relevant when using the large-scale algorithm, and others are only relevant when using the medium-scale algorithm. See Optimization Parameters for detailed information.
We start by describing the LargeScale option since it states a preference for which algorithm to use. It is only a preference because certain conditions must be met to use the large-scale or medium-scale algorithm. For the large-scale algorithm, the nonlinear system of equations cannot be underdetermined; that is, the number of equations (the number of elements of F returned by fun) must be at least as many as the length of x. Furthermore, only the large-scale algorithm handles bound constraints:
LargeScale 
Use large-scale algorithm if possible when set to 'on'. Use medium-scale algorithm when set to 'off'.
Medium-Scale and Large-Scale Algorithms. These parameters are used by both the medium-scale and large-scale algorithms:
Diagnostics 
Print diagnostic information about the function to be minimized. 
Display 
Level of display. 'off' displays no output; 'iter' displays output at each iteration; 'final' (default) displays just the final output. 
Jacobian 
If 'on', lsqnonlin uses a user-defined Jacobian (defined in fun), or Jacobian information (when using JacobMult), for the objective function. If 'off', lsqnonlin approximates the Jacobian using finite differences.
MaxFunEvals 
Maximum number of function evaluations allowed. 
MaxIter 
Maximum number of iterations allowed. 
TolFun 
Termination tolerance on the function value. 
TolX 
Termination tolerance on x . 
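Several of these shared parameters can be set in a single optimset call; the values shown here are illustrative, not defaults:

```matlab
options = optimset('Display','final', ...  % report only the final output
                   'MaxFunEvals',400, ...  % limit on function evaluations
                   'MaxIter',100, ...      % limit on iterations
                   'TolFun',1e-6, ...      % tolerance on the function value
                   'TolX',1e-6);           % tolerance on x
x = lsqnonlin(@myfun, [0 0], [], [], options);
```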
Large-Scale Algorithm Only. These parameters are used only by the large-scale algorithm:
JacobMult 
Function handle for Jacobian multiply function. For large-scale structured problems, this function computes the Jacobian matrix products J*Y, J'*Y, or J'*(J*Y) without actually forming J. The function is of the form

    W = jmfun(Jinfo,Y,flag,p1,p2,...)

where Jinfo and the additional parameters p1,p2,... contain the matrices used to compute J*Y (or J'*Y, or J'*(J*Y)). The first argument Jinfo must be the same as the second argument returned by the objective function fun. The parameters p1,p2,... are the same additional parameters that are passed to lsqnonlin (and to fun). Y is a matrix that has the same number of rows as there are dimensions in the problem. flag determines which product to compute: if flag == 0 then W = J'*(J*Y); if flag > 0 then W = J*Y; if flag < 0 then W = J'*Y. In each case, J is not formed explicitly. lsqnonlin uses Jinfo to compute the preconditioner.
See Nonlinear Minimization with a Dense but Structured Hessian and Equality Constraints for a similar example. 
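A skeleton of such a multiply function. For illustration only, Jinfo is assumed to be the Jacobian J itself; in a real structured problem, Jinfo would hold a compact representation from which the products are formed without ever building J:

```matlab
function W = jmfun(Jinfo, Y, flag)
J = Jinfo;                % illustrative assumption: Jinfo is J itself
if flag == 0
    W = J'*(J*Y);         % product J'*(J*Y)
elseif flag > 0
    W = J*Y;              % product J*Y
else
    W = J'*Y;             % product J'*Y
end
```

Register it with options = optimset('JacobMult',@jmfun).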

JacobPattern 
Sparsity pattern of the Jacobian for finite differencing. If it is not convenient to compute the Jacobian matrix J in fun, lsqnonlin can approximate J via sparse finite differences, provided the structure of J, i.e., the locations of the nonzeros, is supplied as the value for JacobPattern. In the worst case, if the structure is unknown, you can set JacobPattern to be a dense matrix and a full finite-difference approximation is computed in each iteration (this is the default if JacobPattern is not set). This can be very expensive for large problems, so it is usually worth the effort to determine the sparsity structure.
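For example, if each F(i) is known to depend only on x(i) (a hypothetical diagonal structure), the pattern is a sparse identity:

```matlab
n = 100;
Jstr = speye(n);                          % nonzero (i,i) marks dF(i)/dx(i) ~= 0
options = optimset('JacobPattern', Jstr); % sparse finite differences follow Jstr
```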
MaxPCGIter 
Maximum number of PCG (preconditioned conjugate gradient) iterations (see the Algorithm section below). 
PrecondBandWidth 
Upper bandwidth of preconditioner for PCG. By default, diagonal preconditioning is used (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations. 
TolPCG 
Termination tolerance on the PCG iteration. 
TypicalX 
Typical x values. 
Medium-Scale Algorithm Only. These parameters are used only by the medium-scale algorithm:

DerivativeCheck
Compare user-supplied derivatives (Jacobian) to finite-differencing derivatives.

LevenbergMarquardt
Choose the Levenberg-Marquardt over the Gauss-Newton algorithm.

LineSearchType
Line search algorithm choice.
Examples

Find x that minimizes

    sum over k = 1 to 10 of (2 + 2k - exp(k*x(1)) - exp(k*x(2)))^2,

starting at the point x = [0.3, 0.4].

Because lsqnonlin assumes that the sum of squares is not explicitly formed in the user function, the function passed to lsqnonlin should instead compute the vector-valued function

    F(k) = 2 + 2k - exp(k*x(1)) - exp(k*x(2)),

for k = 1 to 10 (that is, F should have 10 components).

First, write an M-file to compute the 10-component vector F.

    function F = myfun(x)
    k = 1:10;
    F = 2 + 2*k - exp(k*x(1)) - exp(k*x(2));

Next, invoke an optimization routine.

    x0 = [0.3 0.4];                       % Starting guess
    [x,resnorm] = lsqnonlin(@myfun,x0);   % Invoke optimizer

After about 24 function evaluations, this example gives the solution

    x =
        0.2578   0.2578

    resnorm =
        124.3622
Algorithm
Large-Scale Optimization. By default lsqnonlin chooses the large-scale algorithm. This algorithm is a subspace trust region method and is based on the interior-reflective Newton method described in [1], [2]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region Methods for Nonlinear Minimization and Preconditioned Conjugate Gradients.
Medium-Scale Optimization. lsqnonlin, with the LargeScale parameter set to 'off' with optimset, uses the Levenberg-Marquardt method with line search [4], [5], [6]. Alternatively, a Gauss-Newton method [3] with line search may be selected. The choice of algorithm is made by setting the LevenbergMarquardt parameter. Setting LevenbergMarquardt to 'off' (and LargeScale to 'off') selects the Gauss-Newton method, which is generally faster when the residual is small.
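For example, to select the Gauss-Newton method explicitly:

```matlab
% Both parameters must be 'off' to get Gauss-Newton;
% 'LargeScale','off' alone gives Levenberg-Marquardt.
options = optimset('LargeScale','off','LevenbergMarquardt','off');
x = lsqnonlin(@myfun, [0 0], [], [], options);
```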
The default line search algorithm, i.e., the LineSearchType parameter set to 'quadcubic', is a safeguarded mixed quadratic and cubic polynomial interpolation and extrapolation method. A safeguarded cubic polynomial method can be selected by setting the LineSearchType parameter to 'cubicpoly'. This method generally requires fewer function evaluations but more gradient evaluations. Thus, if gradients are being supplied and can be calculated inexpensively, the cubic polynomial line search method is preferable. The algorithms used are described fully in the Standard Algorithms chapter.
Diagnostics
Large-Scale Optimization. The large-scale code does not allow equal upper and lower bounds. For example, if lb(2)==ub(2), then lsqnonlin gives an error stating that equal upper and lower bounds are not permitted. (lsqnonlin does not handle equality constraints, which is another way to formulate equal bounds. If equality constraints are present, use fmincon, fminimax, or fgoalattain for alternative formulations where equality constraints can be included.)
Limitations
The function to be minimized must be continuous. lsqnonlin may only give local solutions.

lsqnonlin only handles real variables. When x has complex variables, the variables must be split into real and imaginary parts.
Large-Scale Optimization. The large-scale method for lsqnonlin does not solve underdetermined systems; it requires that the number of equations (i.e., the number of elements of F) be at least as great as the number of variables. In the underdetermined case, the medium-scale algorithm is used instead. (If bound constraints exist, a warning is issued and the problem is solved with the bounds ignored.) See Table 2-4, Large-Scale Problem Coverage and Requirements, for more information on what problem formulations are covered and what information must be provided.
The preconditioner computation used in the preconditioned conjugate gradient part of the large-scale method forms J'*J (where J is the Jacobian matrix) before computing the preconditioner; therefore, a row of J with many nonzeros, which results in a nearly dense product J'*J, may lead to a costly solution process for large problems.
If components of x have no upper (or lower) bounds, then lsqnonlin prefers that the corresponding components of ub (or lb) be set to Inf (or -Inf for lower bounds) as opposed to an arbitrary but very large positive (or negative for lower bounds) number.
Currently, if the analytical Jacobian is provided in fun, the options parameter DerivativeCheck cannot be used with the large-scale method to compare the analytic Jacobian to the finite-difference Jacobian. Instead, use the medium-scale method to check the derivatives with the options parameter MaxIter set to 0 iterations. Then run the problem with the large-scale method.
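The workaround described above can be sketched as follows (myfun_with_jac is a hypothetical function returning both F and J):

```matlab
x0 = [0 0];
% Step 1: derivative check only -- medium-scale method, zero iterations
chk = optimset('LargeScale','off','Jacobian','on', ...
               'DerivativeCheck','on','MaxIter',0);
lsqnonlin(@myfun_with_jac, x0, [], [], chk);
% Step 2: actual solve with the large-scale method
opts = optimset('LargeScale','on','Jacobian','on');
x = lsqnonlin(@myfun_with_jac, x0, [], [], opts);
```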
Medium-Scale Optimization. The medium-scale algorithm does not handle bound constraints.

Since the large-scale algorithm does not handle underdetermined systems and the medium-scale algorithm does not handle bound constraints, problems with both these characteristics cannot be solved by lsqnonlin.
See Also

@ (function_handle), lsqcurvefit, lsqlin, optimset
References
[1] Coleman, T.F. and Y. Li, "An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds," SIAM Journal on Optimization, Vol. 6, pp. 418-445, 1996.

[2] Coleman, T.F. and Y. Li, "On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds," Mathematical Programming, Vol. 67, Number 2, pp. 189-224, 1994.

[3] Dennis, J.E., Jr., "Nonlinear Least-Squares," State of the Art in Numerical Analysis, ed. D. Jacobs, Academic Press, pp. 269-312, 1977.

[4] Levenberg, K., "A Method for the Solution of Certain Problems in Least-Squares," Quarterly Applied Math. 2, pp. 164-168, 1944.

[5] Marquardt, D., "An Algorithm for Least-Squares Estimation of Nonlinear Parameters," SIAM Journal Applied Math. Vol. 11, pp. 431-441, 1963.

[6] Moré, J.J., "The Levenberg-Marquardt Algorithm: Implementation and Theory," Numerical Analysis, ed. G.A. Watson, Lecture Notes in Mathematics 630, Springer-Verlag, pp. 105-116, 1977.