
Constrained Example with Gradients

Ordinarily the medium-scale minimization routines use numerical gradients calculated by finite-difference approximation. This procedure systematically perturbs each of the variables in order to calculate function and constraint partial derivatives. Alternatively, you can provide a function to compute partial derivatives analytically. Typically, the problem is solved more accurately and efficiently if such a function is provided.

To solve Eq. 2-2 using analytically determined gradients, do the following.

Step 1: Write an M-file for the objective function and gradient.
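Assuming Eq. 2-2 is the toolbox's two-variable example with objective f(x) = exp(x1)(4x1^2 + 2x2^2 + 4x1x2 + 2x2 + 1), a sketch of objfungrad.m might look like the following; the nargout guard computes the gradient only when the solver requests it:

    function [f, G] = objfungrad(x)
    % Objective function (Eq. 2-2)
    f = exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
    % Gradient of the objective (Eq. 2-4), evaluated only when requested
    if nargout > 1
        G = [f + exp(x(1))*(8*x(1) + 4*x(2));
             exp(x(1))*(4*x(1) + 4*x(2) + 2)];
    end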

Step 2: Write an M-file for the nonlinear constraints and the gradients of the nonlinear constraints.
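Similarly, assuming the nonlinear constraints of Eq. 2-2 are x1*x2 - x1 - x2 <= -1.5 and x1*x2 >= -10, a sketch of confungrad.m, with the constraints rewritten in the c(x) <= 0 form that fmincon expects:

    function [c, ceq, DC, DCeq] = confungrad(x)
    % Nonlinear inequality constraints, c(x) <= 0
    c(1) = 1.5 + x(1)*x(2) - x(1) - x(2);   % x1*x2 - x1 - x2 <= -1.5
    c(2) = -x(1)*x(2) - 10;                 % x1*x2 >= -10
    ceq = [];                               % no nonlinear equality constraints
    % Gradients of the constraints (Eq. 2-5), one column per constraint
    if nargout > 2
        DC = [x(2)-1, -x(2);
              x(1)-1, -x(1)];
        DCeq = [];
    end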

G contains the partial derivatives of the objective function, f, returned by objfungrad(x), with respect to each of the elements in x:

     G = \begin{bmatrix}
           e^{x_1}\left(4x_1^2 + 2x_2^2 + 4x_1 x_2 + 2x_2 + 1\right) + e^{x_1}\left(8x_1 + 4x_2\right) \\
           e^{x_1}\left(4x_1 + 4x_2 + 2\right)
         \end{bmatrix}     (2-4)

The columns of DC contain the partial derivatives for each respective constraint (i.e., the ith column of DC is the partial derivative of the ith constraint with respect to x). So in the above example, DC is

     DC = \begin{bmatrix}
            x_2 - 1 & -x_2 \\
            x_1 - 1 & -x_1
          \end{bmatrix}     (2-5)

Since you are providing the gradient of the objective in objfungrad.m and the gradient of the constraints in confungrad.m, you must tell fmincon that these M-files contain this additional information. Use optimset to set the GradObj and GradConstr parameters to 'on' in the example's existing options structure:
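For example, assuming the options structure from the earlier example already exists:

    options = optimset(options, 'GradObj', 'on', 'GradConstr', 'on');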

If you do not set these parameters to 'on' in the options structure, fmincon does not use the analytic gradients.

The arguments lb and ub place lower and upper bounds on the independent variables in x. In this example, there are no bound constraints and so they are both set to [].

Step 3: Invoke constrained optimization routine.
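A sketch of the call, assuming the starting point x0 = [-1,1] used in the earlier examples and the medium-scale algorithm discussed above:

    x0 = [-1, 1];                 % starting guess (assumed, as in the earlier example)
    lb = [];  ub = [];            % no bound constraints
    options = optimset('LargeScale', 'off');
    options = optimset(options, 'GradObj', 'on', 'GradConstr', 'on');
    [x, fval] = fmincon(@objfungrad, x0, [], [], [], [], lb, ub, ...
                        @confungrad, options);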

After 20 function evaluations, the solution produced is

    x =
       -9.5474    1.0474
    fval =
        0.0236

