Optimization Toolbox

fmincon

Find the minimum of a constrained nonlinear multivariable function:

    minimize f(x)  over x,  subject to

        c(x) <= 0
        ceq(x) = 0
        A*x <= b
        Aeq*x = beq
        lb <= x <= ub

where x, b, beq, lb, and ub are vectors, A and Aeq are matrices, c(x) and ceq(x) are functions that return vectors, and f(x) is a function that returns a scalar. f(x), c(x), and ceq(x) can be nonlinear functions.

Syntax

    x = fmincon(fun,x0,A,b)
    x = fmincon(fun,x0,A,b,Aeq,beq)
    x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub)
    x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
    x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
    x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options,P1,P2,...)
    [x,fval] = fmincon(...)
    [x,fval,exitflag] = fmincon(...)
    [x,fval,exitflag,output] = fmincon(...)
    [x,fval,exitflag,output,lambda] = fmincon(...)
    [x,fval,exitflag,output,lambda,grad] = fmincon(...)
    [x,fval,exitflag,output,lambda,grad,hessian] = fmincon(...)

Description

fmincon finds the constrained minimum of a scalar function of several variables starting at an initial estimate. This is generally referred to as constrained nonlinear optimization or nonlinear programming.

x = fmincon(fun,x0,A,b) starts at x0 and finds a minimum x to the function described in fun subject to the linear inequalities A*x <= b. x0 can be a scalar, vector, or matrix.

x = fmincon(fun,x0,A,b,Aeq,beq) minimizes fun subject to the linear equalities Aeq*x = beq as well as A*x <= b. Set A=[] and b=[] if no inequalities exist.

x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub) defines a set of lower and upper bounds on the design variables, x, so that the solution is always in the range lb <= x <= ub. Set Aeq=[] and beq=[] if no equalities exist.

x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon) subjects the minimization to the nonlinear inequalities c(x) or equalities ceq(x) defined in nonlcon. fmincon optimizes such that c(x) <= 0 and ceq(x) = 0. Set lb=[] and/or ub=[] if no bounds exist.

x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options) minimizes with the optimization parameters specified in the structure options.

x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options,P1,P2,...) passes the problem-dependent parameters P1, P2, etc., directly to the functions fun and nonlcon. Pass empty matrices as placeholders for A, b, Aeq, beq, lb, ub, nonlcon, and options if these arguments are not needed.
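
For example, a minimal sketch of this calling form (the parameter a, the file names myfun and mycon, and the variables x0, lb, ub, and options are illustrative assumptions, not part of this reference):

    a = 1.5;    % hypothetical problem-dependent parameter
    % myfun and mycon must accept the extra argument, e.g., f = myfun(x,a).
    % Empty matrices stand in for the unused A, b, Aeq, and beq.
    x = fmincon('myfun',x0,[],[],[],[],lb,ub,'mycon',options,a)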

[x,fval] = fmincon(...) returns the value of the objective function fun at the solution x.

[x,fval,exitflag] = fmincon(...) returns a value exitflag that describes the exit condition of fmincon.

[x,fval,exitflag,output] = fmincon(...) returns a structure output with information about the optimization.

[x,fval,exitflag,output,lambda] = fmincon(...) returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.

[x,fval,exitflag,output,lambda,grad] = fmincon(...) returns the value of the gradient of fun at the solution x.

[x,fval,exitflag,output,lambda,grad,hessian] = fmincon(...) returns the value of the Hessian of fun at the solution x.
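
Putting the pieces together, a minimal sketch of a call that requests every output argument (the objective, starting point, and bounds below are assumptions for illustration only):

    fun = inline('(x(1)-1)^2 + (x(2)-2.5)^2');   % illustrative objective
    x0 = [0; 0];                                 % starting estimate
    lb = [0; 0];  ub = [4; 4];                   % bounds only, for simplicity
    [x,fval,exitflag,output,lambda,grad,hessian] = ...
        fmincon(fun,x0,[],[],[],[],lb,ub)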

Arguments

The arguments passed into the function are described in Table 1-1. The arguments returned by the function are described in Table 1-2. Details relevant to fmincon are included below for fun, nonlcon, options, exitflag, lambda, and output.

fun
The function to be minimized. fun takes a vector x and returns a scalar value f of the objective function evaluated at x. You can specify fun to be an inline object. For example,

    fun = inline('sin(x''*x)');
    
Alternatively, fun can be a string containing the name of a function (an M-file, a built-in function, or a MEX-file). If fun='myfun' then the M-file function myfun.m would have the form

    function f = myfun(x)
    f = ...            % Compute function value at x
    

If the gradient of fun can also be computed and options.GradObj is 'on', as set by

    options = optimset('GradObj','on')
    
then the function fun must return, in the second output argument, the gradient value g, a vector, at x. Note that by checking the value of nargout the function can avoid computing g when fun is called with only one output argument (in the case where the optimization algorithm only needs the value of f but not g):

    function [f,g] = myfun(x)
    f = ...         % compute the function value at x
    if nargout > 1  % fun called with two output arguments
       g = ...      % compute the gradient evaluated at x
    end
    

The gradient consists of the partial derivatives of f at the point x. That is, the ith component of g is the partial derivative of f with respect to the ith component of x.
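
As a concrete sketch, suppose f(x) = x(1)^2 + 3*x(1)*x(2) (this objective and the name myfun2 are illustrative, not from this reference):

    function [f,g] = myfun2(x)
    f = x(1)^2 + 3*x(1)*x(2);   % illustrative objective
    if nargout > 1              % gradient requested
       g = [2*x(1) + 3*x(2);    % df/dx(1)
            3*x(1)];            % df/dx(2)
    end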


If the Hessian matrix can also be computed and options.Hessian is 'on', i.e., options = optimset('Hessian','on'), then the function fun must return the Hessian value H, a symmetric matrix, at x in a third output argument. Note that by checking the value of nargout the function can avoid computing H when fun is called with only one or two output arguments (in the case where the optimization algorithm only needs the values of f and g but not H):

    function [f,g,H] = myfun(x)
    f = ...          % Compute the objective function value at x
    if nargout > 1   % fun called with two or three output arguments
       g = ...       % Gradient of the function evaluated at x
       if nargout > 2
          H = ...    % Hessian evaluated at x
       end
    end
    
The Hessian matrix is the matrix of second partial derivatives of f at the point x. That is, the (i,j)th component of H is the second partial derivative of f with respect to x(i) and x(j). The Hessian is by definition a symmetric matrix.
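
Continuing the illustrative myfun2 sketch from above, the Hessian of f(x) = x(1)^2 + 3*x(1)*x(2) is constant:

    function [f,g,H] = myfun2(x)
    f = x(1)^2 + 3*x(1)*x(2);
    if nargout > 1
       g = [2*x(1) + 3*x(2); 3*x(1)];
       if nargout > 2
          H = [2 3;    % [d2f/dx1dx1  d2f/dx1dx2;
               3 0];   %  d2f/dx2dx1  d2f/dx2dx2]
       end
    end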

nonlcon
The function that computes the nonlinear inequality constraints c(x)<=0 and nonlinear equality constraints ceq(x)=0. nonlcon is a string containing the name of a function (an M-file, a built-in, or a MEX-file). nonlcon takes a vector x and returns two arguments, a vector c of the nonlinear inequalities evaluated at x and a vector ceq of the nonlinear equalities evaluated at x. For example, if nonlcon='mycon' then the M-file mycon.m would have the form

    function [c,ceq] = mycon(x)
    c = ...     % Compute nonlinear inequalities at x
    ceq = ...   % Compute the nonlinear equalities at x
    

If the gradients of the constraints can also be computed and options.GradConstr is 'on', as set by

    options = optimset('GradConstr','on')
    
then the function nonlcon must also return, in the third and fourth output arguments, GC, the gradient of c(x), and GCeq, the gradient of ceq(x). Note that by checking the value of nargout the function can avoid computing GC and GCeq when nonlcon is called with only two output arguments (in the case where the optimization algorithm only needs the values of c and ceq but not GC and GCeq):


    function [c,ceq,GC,GCeq] = mycon(x)
    c = ...          % nonlinear inequalities at x
    ceq = ...        % nonlinear equalities at x
    if nargout > 2   % nonlcon called with 4 outputs
       GC = ...      % gradients of the inequalities
       GCeq = ...    % gradients of the equalities
    end
    

If nonlcon returns a vector c of m components and x has length n, then the gradient GC of c(x) is an n-by-m matrix, where GC(i,j) is the partial derivative of c(j) with respect to x(i) (i.e., the jth column of GC is the gradient of the jth inequality constraint c(j)). Likewise, if ceq has p components, the gradient GCeq of ceq(x) is an n-by-p matrix, where GCeq(i,j) is the partial derivative of ceq(j) with respect to x(i) (i.e., the jth column of GCeq is the gradient of the jth equality constraint ceq(j)).
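
As a concrete sketch with n = 2, m = 1, and p = 1 (the name mycon2 and the particular constraints are assumptions for illustration):

    function [c,ceq,GC,GCeq] = mycon2(x)
    c   = x(1)^2 + x(2)^2 - 1;    % one nonlinear inequality (m = 1)
    ceq = x(1)*x(2) - 0.5;        % one nonlinear equality (p = 1)
    if nargout > 2
       GC   = [2*x(1); 2*x(2)];   % n-by-m: column j is the gradient of c(j)
       GCeq = [x(2); x(1)];       % n-by-p: column j is the gradient of ceq(j)
    end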

options
Optimization parameter options. You can set or change the values of these parameters using the optimset function. Some parameters apply to all algorithms, some are only relevant when using the large-scale algorithm, and others are only relevant when using the medium-scale algorithm.

We start by describing the LargeScale option since it states a preference for which algorithm to use. It is only a preference since certain conditions must be met to use the large-scale algorithm. For fmincon, the gradient must be provided (see the description of fun above to see how) or else the medium-scale algorithm will be used.
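
For example, to state a preference for the medium-scale algorithm explicitly:

    options = optimset('LargeScale','off');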


Parameters used by both the large-scale and medium-scale algorithms:



Parameters used by the large-scale algorithm only:



Parameters used by the medium-scale algorithm only:

exitflag
Describes the exit condition:

    > 0   The function converged to a solution x.
     0    The maximum number of function evaluations or iterations was exceeded.
    < 0   The function did not converge to a solution.

lambda
A structure containing the Lagrange multipliers at the solution x (separated by constraint type). The fields of the structure are

    lower        For the lower bounds lb
    upper        For the upper bounds ub
    ineqlin      For the linear inequalities
    eqlin        For the linear equalities
    ineqnonlin   For the nonlinear inequalities
    eqnonlin     For the nonlinear equalities

output
A structure whose fields contain information about the optimization:

    iterations      The number of iterations taken
    funcCount       The number of function evaluations
    algorithm       The algorithm used
    cgiterations    The number of PCG iterations (large-scale algorithm only)
    stepsize        The final step size taken (medium-scale algorithm only)
    firstorderopt   A measure of first-order optimality (large-scale algorithm only)

Examples

Find values of x that minimize f(x) = -x1*x2*x3, starting at the point x = [10; 10; 10] and subject to the constraints

    0 <= x1 + 2*x2 + 2*x3 <= 72

First, write an M-file that returns a scalar value f of the function evaluated at x:

    function f = myfun(x)
    f = -x(1) * x(2) * x(3);

Then rewrite the constraints as both less than or equal to a constant,

    -x1 - 2*x2 - 2*x3 <= 0
     x1 + 2*x2 + 2*x3 <= 72

Since both constraints are linear, formulate them as the matrix inequality A*x <= b, where

    A = [-1 -2 -2;
          1  2  2];
    b = [0; 72];

Next, supply a starting point and invoke an optimization routine:

    x0 = [10; 10; 10];    % Starting guess at the solution
    [x,fval] = fmincon('myfun',x0,A,b)

After 66 function evaluations, the solution is

    x =
        24.0000
        12.0000
        12.0000

where the function value is

    fval =
      -3.4560e+03

and the linear inequality constraints evaluate to be <= 0:

    A*x - b =
       -72
         0

Notes

Large-scale optimization.    To use the large-scale method, the gradient must be provided in fun (and options.GradObj set to 'on'). A warning is given if no gradient is provided and options.LargeScale is not 'off'. fmincon permits g(x) to be an approximate gradient but this option is not recommended: the numerical behavior of most optimization codes is considerably more robust when the true gradient is used.

The large-scale method in fmincon is most effective when the matrix of second derivatives, i.e., the Hessian matrix H(x), is also computed. However, evaluation of the true Hessian matrix is not required. For example, if you can supply the Hessian sparsity structure (using the HessPattern parameter in options), then fmincon will compute a sparse finite-difference approximation to H(x).
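
For instance, a sketch of supplying a tridiagonal sparsity pattern (the problem size n and the pattern itself are assumptions for illustration):

    n = 100;                              % assumed problem size
    e = ones(n,1);
    pattern = spdiags([e e e],-1:1,n,n);  % tridiagonal sparsity structure
    options = optimset('GradObj','on','HessPattern',pattern);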

If x0 is not strictly feasible, fmincon chooses a new strictly feasible (centered) starting point.

If components of x have no upper (or lower) bounds, then fmincon prefers that the corresponding components of ub (or lb) be set to Inf (or -Inf for lb), rather than to an arbitrary but very large positive (or negative) number.
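
For example, with two variables (illustrative values):

    lb = [0; -Inf];    % x(2) has no lower bound: use -Inf, not -1e10
    ub = [Inf; 10];    % x(1) has no upper bound: use Inf, not 1e10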

Several aspects of linearly constrained minimization should be noted:

Medium-scale optimization.    Better numerical results are likely if you specify equalities explicitly using Aeq and beq, instead of implicitly using lb and ub.

If equality constraints are present, and dependent equalities are detected and removed in the quadratic subproblem, 'dependent' is printed under the Procedures heading (when iterative output is requested with options.Display = 'iter'). The dependent equalities are removed only when the equalities are consistent. If the system of equalities is not consistent, the subproblem is infeasible and 'infeasible' is printed under the Procedures heading.

Algorithm

Large-scale optimization.    By default fmincon chooses the large-scale algorithm if the user supplies the gradient in fun (and GradObj is 'on' in options) and if only upper and lower bounds exist or only linear equality constraints exist. This algorithm is a subspace trust region method and is based on the interior-reflective Newton method described in [5],[6]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See the trust-region and preconditioned conjugate gradient method descriptions in the Large-Scale Algorithms chapter.

Medium-scale optimization.    fmincon uses a Sequential Quadratic Programming (SQP) method. In this method, a Quadratic Programming (QP) subproblem is solved at each iteration. An estimate of the Hessian of the Lagrangian is updated at each iteration using the BFGS formula (see fminunc, references [3, 6]).

A line search is performed using a merit function similar to that proposed by [1] and [2, 3]. The QP subproblem is solved using an active set strategy similar to that described in [4]. A full description of this algorithm is found in the "Constrained Optimization" section of the Introduction to Algorithms chapter of the toolbox manual.

See also the SQP implementation section in the Introduction to Algorithms chapter for more details on the algorithm used.

Diagnostics

Large-scale optimization.    The large-scale code does not allow equal upper and lower bounds. For example, if lb(2)==ub(2), then fmincon exits with an error.

If you only have equality constraints you can still use the large-scale method. But if you have both equalities and bounds, you must use the medium-scale method.
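
A sketch of a call that satisfies these restrictions (linear equalities only, with the gradient supplied; Aeq, beq, x0, and 'myfun' are assumed problem data):

    options = optimset('GradObj','on','LargeScale','on');
    x = fmincon('myfun',x0,[],[],Aeq,beq,[],[],[],options);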

Limitations

The function to be minimized and the constraints must both be continuous. fmincon may only give local solutions.

When the problem is infeasible, fmincon attempts to minimize the maximum constraint value.

The objective function and the constraint functions must be real-valued; that is, they cannot return complex values.

Large-scale optimization.    To use the large-scale algorithm, the user must supply the gradient in fun (and GradObj must be set to 'on' in options), and the constraints must be limited to either upper and lower bounds only, or linear equality constraints only, in which case Aeq cannot have more rows than columns. Aeq is typically sparse. See Table 1-4 for more information on what problem formulations are covered and what information must be provided.

Currently, if the analytical gradient is provided in fun, the options parameter DerivativeCheck cannot be used with the large-scale method to compare the analytic gradient to the finite-difference gradient. Instead, use the medium-scale method to check the derivative with options parameter MaxIter set to 0 iterations. Then run the problem with the large-scale method.
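
A sketch of this workaround (x0, lb, ub, and 'myfun' are assumed problem data; 'myfun' returns f and g as described under fun above):

    % Step 1: check the analytic gradient against finite differences using
    % the medium-scale method with zero iterations
    options = optimset('GradObj','on','LargeScale','off', ...
                       'DerivativeCheck','on','MaxIter',0);
    fmincon('myfun',x0,[],[],[],[],lb,ub,[],options);

    % Step 2: re-run the problem with the large-scale method
    options = optimset('GradObj','on','LargeScale','on');
    x = fmincon('myfun',x0,[],[],[],[],lb,ub,[],options);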

References

[1] Han, S.P., "A Globally Convergent Method for Nonlinear Programming," Journal of Optimization Theory and Applications, Vol. 22, p. 297, 1977.

[2] Powell, M.J.D., "The Convergence of Variable Metric Methods For Nonlinearly Constrained Optimization Calculations," Nonlinear Programming 3, (O.L. Mangasarian, R.R. Meyer, and S.M. Robinson, eds.) Academic Press, 1978.

[3] Powell, M.J.D., "A Fast Algorithm for Nonlinearly Constrained Optimization Calculations," Numerical Analysis, G.A. Watson, ed., Lecture Notes in Mathematics, Vol. 630, Springer-Verlag, 1978.

[4] Gill, P.E., W. Murray, and M.H. Wright, Practical Optimization, Academic Press, London, 1981.

[5] Coleman, T.F. and Y. Li, "On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds," Mathematical Programming, Vol. 67, Number 2, pp. 189-224, 1994.

[6] Coleman, T.F. and Y. Li, "An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds," SIAM Journal on Optimization, Vol. 6, pp. 418-445, 1996.

See Also

fminbnd, fminsearch, fminunc, optimset


