gradient of the objective function; will be computed numerically
if it is NULL and the selected local solver requires derivatives.
lower, upper
lower and upper bound constraints.
hin, hinjac
defines the inequality constraints, hin(x) >= 0; hinjac supplies their Jacobian.
heq, heqjac
defines the equality constraints, heq(x) = 0; heqjac supplies their Jacobian.
localsolver
available local solvers: COBYLA, LBFGS, MMA, or SLSQP.
localtol
tolerance applied in the selected local solver.
ineq2local
logical; shall the inequality constraints be handled by the
local solver? Not possible at the moment.
nl.info
logical; shall the original NLopt info be shown.
control
list of options, see nl.opts for help.
...
additional arguments passed to the function.
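As a point of reference, a minimal call might look like the sketch below. It assumes this page documents the auglag() wrapper from the nloptr package (the function and package names are assumptions, not stated above); the objective and constraints are purely illustrative.

    library(nloptr)                                   # assumed package providing auglag()

    fn  <- function(x) (x[1] - 2)^2 + (x[2] - 1)^2    # objective to minimize
    hin <- function(x) -0.25 * x[1]^2 - x[2]^2 + 1    # inequality constraint, hin(x) >= 0
    heq <- function(x) x[1] - 2 * x[2] + 1            # equality constraint, heq(x) = 0

    res <- auglag(x0 = c(1, 1), fn = fn, hin = hin, heq = heq,
                  localsolver = "SLSQP", localtol = 1e-6)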
Details
This method combines the objective function and the nonlinear
inequality/equality constraints (if any) into a single function:
essentially, the objective plus a ‘penalty’ for any violated constraints.
This modified objective function is then passed to another optimization
algorithm with no nonlinear constraints. If the constraints are violated
by the solution of this sub-problem, then the size of the penalties is
increased and the process is repeated; eventually, the process must
converge to the desired solution (if it exists).
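Conceptually, the combined function resembles the rough R sketch below; this is a simplified quadratic-penalty illustration only, not the actual NLopt augmented-Lagrangian code, which additionally maintains Lagrange-multiplier estimates.

    ## Illustration only: objective plus quadratic penalties for violated
    ## constraints; rho would be increased between outer iterations.
    penalized_fn <- function(x, fn, hin = NULL, heq = NULL, rho = 10) {
      pen <- 0
      if (!is.null(hin)) pen <- pen + sum(pmin(hin(x), 0)^2)   # violated if hin(x) < 0
      if (!is.null(heq)) pen <- pen + sum(heq(x)^2)            # violated if heq(x) != 0
      fn(x) + rho * pen
    }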
Since all of the actual optimization is performed in this subsidiary
optimizer, the subsidiary algorithm that you specify determines whether
the optimization is gradient-based or derivative-free.
The local solvers available at the moment are “COBYLA” (for the
derivative-free approach) and “LBFGS”, “MMA”, or “SLSQP” (for
smooth functions). The tolerance for the local solver has to be provided.
There is a variant that only uses penalty functions for equality constraints
while inequality constraints are passed through to the subsidiary algorithm
to be handled directly; in this case, the subsidiary algorithm must handle
inequality constraints.
(At the moment, this variant has been turned off because of problems with
the NLOPT library.)
Value
List with components:
par
the optimal solution found so far.
value
the function value corresponding to par.
iter
number of (outer) iterations, see maxeval.
global_solver
the global NLOPT solver used.
local_solver
the local NLOPT solver used; one of COBYLA, LBFGS, MMA, or SLSQP.
convergence
integer code indicating successful completion (> 0)
or a possible error number (< 0).
message
character string produced by NLopt and giving additional
information.
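For instance, with a result res as in the call sketched after the argument list, the components could be inspected as follows (again assuming the list structure described above):

    res$par          # optimal parameters found so far
    res$value        # objective value at res$par
    res$convergence  # > 0 on success, < 0 on error
    res$message      # textual status information from NLopt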
Note
Birgin and Martinez provide their own free implementation of the method as
part of the TANGO project; other implementations can be found in semi-free
packages like LANCELOT.
References
Andrew R. Conn, Nicholas I. M. Gould, and Philippe L. Toint, “A globally
convergent augmented Lagrangian algorithm for optimization with general
constraints and simple bounds,”
SIAM J. Numer. Anal. vol. 28, no. 2, p. 545-572 (1991).
E. G. Birgin and J. M. Martinez, “Improving ultimate convergence of an
augmented Lagrangian method,”
Optimization Methods and Software vol. 23, no. 2, p. 177-195 (2008).