Using the Optimization Functions
Most of these optimization routines require the definition of an M-file containing the function to be minimized, i.e., the objective function. Alternatively, you can use an inline object created from a MATLAB expression. Maximization is achieved by supplying the routines with -f, where f is the function being optimized.
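For example, here is a minimal sketch, assuming a MATLAB version with function handles and anonymous functions; the function name objfun, its expression, and the starting point are all illustrative:

```
% objfun.m -- objective function to be minimized (illustrative)
function f = objfun(x)
f = x(1)^2 + 3*x(2)^2 + 2*x(1)*x(2);

% At the command line:
x0 = [1; 1];                       % starting point (illustrative)
x = fminunc(@objfun, x0);          % minimize objfun
x = fminunc(@(x) -objfun(x), x0);  % maximize objfun by minimizing -f
```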
Optimization options passed to the routines change optimization parameters. Default parameters are used extensively, but any of them can be changed through an options structure.
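As a sketch, selected defaults can be overridden with optimset; the option names below are standard optimset parameters, and the values are illustrative:

```
% Create an options structure that overrides a few defaults
options = optimset('Display', 'iter', 'TolFun', 1e-8, 'MaxIter', 400);
x0 = [1; 1];
[x, fval] = fminunc(@(x) x(1)^2 + 3*x(2)^2, x0, options);
```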
Gradients are calculated using an adaptive finite-difference method unless they are supplied in a function. Parameters can be passed directly to functions, avoiding the need for global variables.
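The sketch below illustrates both points; the function objgrad and the parameter a are illustrative, and the 'GradObj' option (per optimset) tells the solver that a user-supplied gradient is available:

```
% objgrad.m -- returns the objective and, when requested, its gradient
function [f, g] = objgrad(x, a)
f = a*x(1)^2 + x(2)^2;
if nargout > 1                     % solver asked for the gradient too
    g = [2*a*x(1); 2*x(2)];
end

% At the command line: pass the parameter a without a global variable
a = 3;
options = optimset('GradObj', 'on');
x = fminunc(@(x) objgrad(x, a), [1; 1], options);
```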
This guide separates "medium-scale" algorithms from "large-scale" algorithms. Medium-scale is not a standard term and is used here only to differentiate these algorithms from the large-scale algorithms, which are designed to solve problems with many variables or constraints efficiently, typically by exploiting sparsity.
Medium-Scale Algorithms
The Optimization Toolbox routines offer a choice of algorithms and line search strategies. The principal algorithms for unconstrained minimization are the Nelder-Mead simplex search method and the BFGS (Broyden, Fletcher, Goldfarb, and Shanno) quasi-Newton method. For constrained minimization, minimax, goal attainment, and semi-infinite optimization, variations of sequential quadratic programming (SQP) are used. Nonlinear least-squares problems use the Gauss-Newton and Levenberg-Marquardt methods; nonlinear equation solving uses these methods as well as a trust-region dogleg algorithm.
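As an illustration, here is a sketch that selects the Levenberg-Marquardt method for a small least-squares fit; the 'LargeScale' and 'LevenbergMarquardt' option names are assumed from the optimset documentation of this vintage, and the data are synthetic:

```
t = (0:9)';  y = exp(-0.5*t);                 % synthetic data
resfun = @(k) exp(-k*t) - y;                  % residual vector
options = optimset('LargeScale', 'off', 'LevenbergMarquardt', 'on');
k = lsqnonlin(resfun, 0.1, [], [], options);  % recovers k near 0.5
```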
A choice of line search strategy is given for unconstrained minimization and nonlinear least-squares problems. The line search strategies use safeguarded cubic and quadratic interpolation and extrapolation methods.
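As a sketch, the line search strategy can be selected through the options structure; the 'LineSearchType' option name and its values are assumed from the optimset documentation of this vintage:

```
% Request the safeguarded cubic polynomial line search (assumed name)
options = optimset('LargeScale', 'off', 'LineSearchType', 'cubicpoly');
x = fminunc(@(x) (x(1)-1)^2 + (x(2)+2)^4, [0; 0], options);
```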
Large-Scale Algorithms
All the large-scale algorithms, except linear programming, are trust-region methods. Bound constrained problems are solved using reflective Newton methods. Equality constrained problems are solved using a projective preconditioned conjugate gradient iteration. Either sparse iterative solvers or sparse direct solvers can be used to solve the linear systems that determine the current step, and some choice of preconditioning is available in the iterative solvers.
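For example, here is a sketch of tuning the PCG iteration on a small bound-constrained quadratic program; the option names are standard optimset parameters for the large-scale solvers, and the problem data are illustrative:

```
H = [2 -1; -1 4];  f = [0; 0];     % objective 0.5*x'*H*x + f'*x
lb = [1; 1];  ub = [3; 3];         % bound constraints
options = optimset('LargeScale', 'on', ...
                   'TolPCG', 0.1, ...          % PCG stopping tolerance
                   'PrecondBandWidth', 0);     % diagonal preconditioner
x = quadprog(H, f, [], [], [], [], lb, ub, [], options);
```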
The linear programming method is a variant of Mehrotra's predictor-corrector algorithm, a primal-dual interior-point method.
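Here is a sketch applying linprog, whose large-scale method is this interior-point algorithm; the problem data are illustrative:

```
f = [-5; -4];                      % minimize f'*x
A = [6 4; 1 2];  b = [24; 6];      % subject to A*x <= b
lb = [0; 0];                       % and x >= 0
options = optimset('LargeScale', 'on');
x = linprog(f, A, b, [], [], lb, [], [], options);
```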