fmincon
Purpose
Find the minimum of a constrained nonlinear multivariable function:

   minimize f(x)   subject to   c(x) <= 0, ceq(x) = 0 (nonlinear constraints),
                                A*x <= b, Aeq*x = beq (linear constraints),
                                lb <= x <= ub (bounds)

where x, b, beq, lb, and ub are vectors, A and Aeq are matrices, c(x) and ceq(x) are functions that return vectors, and f(x) is a function that returns a scalar. f(x), c(x), and ceq(x) can be nonlinear functions.
Syntax
x = fmincon(fun,x0,A,b)
x = fmincon(fun,x0,A,b,Aeq,beq)
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub)
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options,P1,P2,...)
[x,fval] = fmincon(...)
[x,fval,exitflag] = fmincon(...)
[x,fval,exitflag,output] = fmincon(...)
[x,fval,exitflag,output,lambda] = fmincon(...)
[x,fval,exitflag,output,lambda,grad] = fmincon(...)
[x,fval,exitflag,output,lambda,grad,hessian] = fmincon(...)
Description
fmincon finds the constrained minimum of a scalar function of several variables starting at an initial estimate. This is generally referred to as constrained nonlinear optimization or nonlinear programming.
x = fmincon(fun,x0,A,b) starts at x0 and finds a minimum x to the function described in fun subject to the linear inequalities A*x <= b. x0 can be a scalar, vector, or matrix.
x = fmincon(fun,x0,A,b,Aeq,beq) minimizes fun subject to the linear equalities Aeq*x = beq as well as A*x <= b. Set A=[] and b=[] if no inequalities exist.
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub) defines a set of lower and upper bounds on the design variables, x, so that the solution is always in the range lb <= x <= ub. Set Aeq=[] and beq=[] if no equalities exist.
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon) subjects the minimization to the nonlinear inequalities c(x) and equalities ceq(x) defined in nonlcon. fmincon optimizes such that c(x) <= 0 and ceq(x) = 0. Set lb=[] and/or ub=[] if no bounds exist.
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options) minimizes with the optimization parameters specified in the structure options.

x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options,P1,P2,...) passes the problem-dependent parameters P1, P2, etc., directly to the functions fun and nonlcon. Pass empty matrices as placeholders for A, b, Aeq, beq, lb, ub, nonlcon, and options if these arguments are not needed.
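For illustration, a minimal sketch of parameter passing follows. The function name myfun2, the quadratic objective, and the parameter a are assumptions for this example, not part of the toolbox:

function f = myfun2(x,a)
f = a*x(1)^2 + x(2)^2;     % a is a problem-dependent parameter

a = 1.5;                   % Value of the extra parameter
x0 = [1; 1];
lb = [0; 0];
% Empty matrices are placeholders for the unused arguments A, b, Aeq,
% beq, ub, nonlcon, and options; a is passed through to myfun2 as P1.
x = fmincon('myfun2',x0,[],[],[],[],lb,[],[],[],a)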
[x,fval] = fmincon(...) returns the value of the objective function fun at the solution x.

[x,fval,exitflag] = fmincon(...) returns a value exitflag that describes the exit condition of fmincon.

[x,fval,exitflag,output] = fmincon(...) returns a structure output with information about the optimization.

[x,fval,exitflag,output,lambda] = fmincon(...) returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.

[x,fval,exitflag,output,lambda,grad] = fmincon(...) returns the value of the gradient of fun at the solution x.

[x,fval,exitflag,output,lambda,grad,hessian] = fmincon(...) returns the value of the Hessian of fun at the solution x.
Arguments
The arguments passed into the function are described in Table 1-1. The arguments returned by the function are described in Table 1-2. Details relevant to fmincon are included below for fun, nonlcon, options, exitflag, lambda, and output.

fun - The function to be minimized. fun accepts a vector x and returns a scalar f, the objective function evaluated at x. If the gradient of fun can also be computed and options.GradObj is 'on', then fun must return, in a second output argument, the gradient value g, a vector, at x. The gradient is the partial derivatives of f at the point x. If the Hessian matrix can also be computed and options.Hessian is 'on', then fun must return, in a third output argument, the Hessian value H, a symmetric matrix, at x. The Hessian matrix is the matrix of second partial derivatives of f at the point x. Such a function has the form

function [f,g,H] = myfun(x)
f = ...          % Compute the objective function value at x
if nargout > 1   % fun called with two output arguments
   g = ...       % Gradient of the function evaluated at x
   if nargout > 2
      H = ...    % Hessian evaluated at x
   end
end
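As a concrete instance of this pattern, the following version of myfun returns the value, gradient, and Hessian of Rosenbrock's function. The particular objective is an assumption chosen for this sketch, not taken from this page:

function [f,g,H] = myfun(x)
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;      % Objective value at x
if nargout > 1                                 % Gradient requested
   g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));
         200*(x(2) - x(1)^2)];
   if nargout > 2                              % Hessian requested
      H = [1200*x(1)^2 - 400*x(2) + 2, -400*x(1);
           -400*x(1),                   200];
   end
end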
nonlcon - The function that computes the nonlinear inequality constraints c(x) <= 0 and the nonlinear equality constraints ceq(x) = 0. nonlcon accepts a vector x and returns two vectors, c and ceq: c contains the nonlinear inequalities evaluated at x, and ceq contains the nonlinear equalities evaluated at x. If the gradients of the constraints can also be computed and options.GradConstr is 'on', then nonlcon must also return, in a third and fourth output argument, GC, the gradient of c(x), and GCeq, the gradient of ceq(x).
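A minimal sketch of such a constraint function, assuming a single (illustrative) unit-disk inequality and no nonlinear equalities:

function [c,ceq,GC,GCeq] = mycon(x)
c = x(1)^2 + x(2)^2 - 1;     % Nonlinear inequality: x1^2 + x2^2 <= 1
ceq = [];                    % No nonlinear equalities
if nargout > 2               % Constraint gradients requested
   GC = [2*x(1); 2*x(2)];    % Gradient of c (one column per constraint)
   GCeq = [];
end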
options - Optimization parameter options. You can set or change the values of these parameters using the optimset function. The parameters fall into three groups:

Parameters used by both the large-scale and medium-scale algorithms include, for example, Diagnostics, Display, GradObj, MaxFunEvals, MaxIter, TolCon, TolFun, and TolX.
Parameters used by the large-scale algorithm only include, for example, Hessian, HessPattern, MaxPCGIter, PrecondBandWidth, TolPCG, and TypicalX.
Parameters used by the medium-scale algorithm only include, for example, DerivativeCheck, DiffMaxChange, DiffMinChange, GradConstr, and LineSearchType.
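For example, to display output at each iteration and tighten the termination tolerance on the function value (the parameter names appear elsewhere on this page; the tolerance value is arbitrary):

options = optimset('Display','iter','TolFun',1e-8);
[x,fval] = fmincon('myfun',x0,A,b,[],[],[],[],[],options)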
exitflag - Describes the exit condition:
> 0   The function converged to a solution x.
0     The maximum number of function evaluations or iterations was exceeded.
< 0   The function did not converge to a solution.

lambda - A structure containing the Lagrange multipliers at the solution x, separated by constraint type. The fields are lower (lower bounds lb), upper (upper bounds ub), ineqlin (linear inequalities), eqlin (linear equalities), ineqnonlin (nonlinear inequalities), and eqnonlin (nonlinear equalities).

output - A structure whose fields contain information about the optimization: the number of iterations taken (iterations), the number of function evaluations (funcCount), and the algorithm used (algorithm).
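A short sketch of how these outputs might be inspected after a run with linear inequality constraints (field names as described above):

[x,fval,exitflag,output,lambda] = fmincon('myfun',x0,A,b);
if exitflag > 0
   disp('Converged to a solution.')
end
output.iterations      % Number of iterations taken
lambda.ineqlin         % Multipliers for the inequalities A*x <= b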
Examples
Find values of x that minimize f(x) = -x1*x2*x3, starting at the point x = [10; 10; 10] and subject to the constraints

0 <= x1 + 2*x2 + 2*x3 <= 72

First, write an M-file that returns a scalar value f of the function evaluated at x:

function f = myfun(x)
f = -x(1) * x(2) * x(3);

Then rewrite the constraints as both less than or equal to a constant:

-x1 - 2*x2 - 2*x3 <= 0
 x1 + 2*x2 + 2*x3 <= 72

Since both constraints are linear, formulate them as the matrix inequality A*x <= b, where

A = [-1 -2 -2; 1 2 2];
b = [0; 72];

Next, supply a starting point and invoke fmincon:

x0 = [10; 10; 10];    % Starting guess at the solution
[x,fval] = fmincon('myfun',x0,A,b)

After 66 function evaluations, the solution is

x =
   24.0000
   12.0000
   12.0000

where the function value is

fval =
  -3.4560e+03

and the linear inequality constraints evaluate to

A*x - b =
   -72
     0

Both components are nonpositive, so the constraints hold.
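The zero residual in the second component indicates that the constraint x1 + 2*x2 + 2*x3 <= 72 is active at the solution. As a sketch, this can be confirmed by requesting the Lagrange multipliers as well; only the active constraint should have a nonzero multiplier:

[x,fval,exitflag,output,lambda] = fmincon('myfun',x0,A,b);
lambda.ineqlin      % Only the second component is nonzero: that
                    % constraint is active at the solution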
Notes
Large-scale optimization. To use the large-scale method, the gradient must be provided in fun (and options.GradObj must be set to 'on'). A warning is given if no gradient is provided and the LargeScale parameter is not 'off'. fmincon permits g(x) to be an approximate gradient, but this option is not recommended: the numerical behavior of most optimization codes is considerably more robust when the true gradient is used.

The large-scale method in fmincon is most effective when the matrix of second derivatives, i.e., the Hessian matrix H(x), is also computed. However, evaluation of the true Hessian matrix is not required. For example, if you can supply the Hessian sparsity structure (using the HessPattern parameter in options), then fmincon computes a sparse finite-difference approximation to H(x).

If x0 is not strictly feasible, fmincon chooses a new strictly feasible (centered) starting point.

If components of x have no upper (or lower) bounds, then fmincon prefers that the corresponding components of ub (or lb) be set to Inf (or -Inf in the case of lower bounds), as opposed to an arbitrary but very large positive (or negative in the case of lower bounds) number; see the sketch after the following list.

Several aspects of linearly constrained minimization should be noted:

- A dense (or fairly dense) column of the equality matrix Aeq can result in considerable fill and computational cost.
- fmincon removes (numerically) linearly dependent rows in Aeq; however, this process involves repeated matrix factorizations and therefore can be costly if there are many dependencies.
- Each iteration involves a sparse least-squares solve with a matrix formed from Aeq' and R', where R' is the Cholesky factor of the preconditioner. Therefore, there is a potential conflict between choosing an effective preconditioner and minimizing fill in that matrix.
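As mentioned above, use Inf rather than a large finite number for absent bounds. A minimal sketch (the particular bound values are illustrative):

lb = [-Inf; -Inf; 0];    % Only x(3) is bounded below; use -Inf, not -1e10
ub = [10; Inf; Inf];     % Only x(1) is bounded above; use Inf, not 1e10
x = fmincon('myfun',x0,[],[],[],[],lb,ub)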
Medium-scale optimization. Better numerical results are likely if you specify equalities explicitly, using Aeq and beq, instead of implicitly, using lb and ub.

If equality constraints are present, and dependent equalities are detected and removed in the quadratic subproblem, 'dependent' is printed under the Procedures heading (when output is asked for using options.Display = 'iter'). The dependent equalities are only removed when the equalities are consistent. If the system of equalities is not consistent, the subproblem is infeasible and 'infeasible' is printed under the Procedures heading.
Algorithm
Large-scale optimization. By default, fmincon chooses the large-scale algorithm if the user supplies the gradient in fun (and GradObj is 'on' in options) and if only upper and lower bounds exist or only linear equality constraints exist. This algorithm is a subspace trust region method and is based on the interior-reflective Newton method described in [5] and [6]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See the trust-region and preconditioned conjugate gradient method descriptions in the Large-Scale Algorithms chapter.

Medium-scale optimization. fmincon uses a Sequential Quadratic Programming (SQP) method. In this method, a Quadratic Programming (QP) subproblem is solved at each iteration. An estimate of the Hessian of the Lagrangian is updated at each iteration using the BFGS formula (see fminunc, references [3, 6] on that page). A line search is performed using a merit function similar to that proposed by [1] and [2, 3]. The QP subproblem is solved using an active set strategy similar to that described in [4]. A full description of this algorithm is found in the "Constrained Optimization" section of the Introduction to Algorithms chapter of the toolbox manual. See also the SQP implementation section in the Introduction to Algorithms chapter for more details on the algorithm used.
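To force the medium-scale SQP method even when a gradient is supplied, set the LargeScale parameter to 'off'. A sketch, with the other arguments as in the example above:

options = optimset('LargeScale','off','GradObj','on');
[x,fval] = fmincon('myfun',x0,A,b,[],[],[],[],[],options)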
Diagnostics
Large-scale optimization. The large-scale code does not allow equal upper and lower bounds. For example, if lb(2) == ub(2), fmincon gives the error:

Equal upper and lower bounds not permitted in this large-scale method.
Use equality constraints and the medium-scale method instead.

If you only have equality constraints, you can still use the large-scale method. But if you have both equalities and bounds, you must use the medium-scale method.
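For instance, a variable fixed via an equal pair of bounds can be rewritten as a linear equality constraint and solved with the medium-scale method. A sketch, with illustrative values:

% Rejected by the large-scale method: lb(2) == ub(2)
lb = [0; 5; 0];  ub = [10; 5; 10];
% Equivalent formulation: fix x(2) = 5 with an equality constraint
Aeq = [0 1 0];  beq = 5;
lb = [0; -Inf; 0];  ub = [10; Inf; 10];
options = optimset('LargeScale','off');
x = fmincon('myfun',x0,[],[],Aeq,beq,lb,ub,[],options)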
Limitations
The function to be minimized and the constraints must both be continuous. fmincon may only give local solutions. When the problem is infeasible, fmincon attempts to minimize the maximum constraint value.

The objective function and constraint function must be real-valued; that is, they cannot return complex values.

Large-scale optimization. To use the large-scale algorithm, the user must supply the gradient in fun (and GradObj must be set to 'on' in options), and either only upper and lower bound constraints may be specified, or only linear equality constraints may exist, in which case Aeq cannot have more rows than columns. Aeq is typically sparse. See Table 1-4 for more information on what problem formulations are covered and what information must be provided.

Currently, if the analytical gradient is provided in fun, the options parameter DerivativeCheck cannot be used with the large-scale method to compare the analytic gradient to the finite-difference gradient. Instead, use the medium-scale method to check the derivative, with the options parameter MaxIter set to 0 iterations. Then run the problem with the large-scale method.
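A sketch of that two-step procedure (parameter names as above; the bound constraints are illustrative):

% Step 1: check the analytic gradient with the medium-scale method
options = optimset('LargeScale','off','GradObj','on', ...
                   'DerivativeCheck','on','MaxIter',0);
fmincon('myfun',x0,[],[],[],[],lb,ub,[],options);
% Step 2: solve with the large-scale method
options = optimset('LargeScale','on','GradObj','on');
[x,fval] = fmincon('myfun',x0,[],[],[],[],lb,ub,[],options)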
References
[1] Han, S.P., "A Globally Convergent Method for Nonlinear Programming," Journal of Optimization Theory and Applications, Vol. 22, p. 297, 1977.
[2] Powell, M.J.D., "The Convergence of Variable Metric Methods for Nonlinearly Constrained Optimization Calculations," Nonlinear Programming 3 (O.L. Mangasarian, R.R. Meyer, and S.M. Robinson, eds.), Academic Press, 1978.
[3] Powell, M.J.D., "A Fast Algorithm for Nonlinearly Constrained Optimization Calculations," Numerical Analysis (G.A. Watson, ed.), Lecture Notes in Mathematics, Vol. 630, Springer-Verlag, 1978.
[4] Gill, P.E., W. Murray, and M.H. Wright, Practical Optimization, Academic Press, London, 1981.
[5] Coleman, T.F. and Y. Li, "On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds," Mathematical Programming, Vol. 67, No. 2, pp. 189-224, 1994.
[6] Coleman, T.F. and Y. Li, "An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds," SIAM Journal on Optimization, Vol. 6, pp. 418-445, 1996.