Initial values for the parameters to be optimized over: z=(x, lambda, mu).
dimx
a vector of the dimensions of x.
obj
objective function (to be minimized), see details.
argobj
a list of additional arguments.
grobj
gradient of the objective function, see details.
arggrobj
a list of additional arguments of the objective gradient.
heobj
Hessian of the objective function, see details.
argheobj
a list of additional arguments of the objective Hessian.
joint
joint function (h(x)<=0), see details.
argjoint
a list of additional arguments of the joint function.
jacjoint
Jacobian of the joint function, see details.
argjacjoint
a list of additional arguments of the Jacobian.
method
either "pure", "UR", "vH", "RRE", "MPE",
"SqRRE" or "SqMPE" method, see details. "default"
corresponds to "MPE".
problem
either "NIR", "VIP", see details.
merit
either "NI", "VI", "FP", see details.
order.method
the order of the extrapolation method.
control.outer
a list with control parameters for the fixed point algorithm.
control.inner
a list with control parameters for the fixed point function.
silent
a logical to suppress traces.
param
a list of parameters for the computation of the fixed point function.
stepfunc
the step function, only needed when method="UR".
argstep
additional arguments for the step function.
...
further arguments to be passed to the optimization routine.
NOT to the functions.
Details
Functions passed as arguments must respect the following templates:
obj must have as arguments the current iterate z, the player number i
and optionally additional arguments given in a list.
grobj must have as arguments the current iterate z, the player number i,
the derivative index j and optionally additional arguments given in a list.
heobj must have as arguments the current iterate z, the player number i,
the derivative indices j, k and optionally additional arguments given in a list.
joint must have as arguments the current iterate z
and optionally additional arguments given in a list.
jacjoint must have as arguments the current iterate z,
the derivative index j and optionally additional arguments given in a list.
The fixed-point approach consists in solving the equation y(x) = x.
(a) Crude or pure fixed-point method:
It simply consists of the iterations x_{n+1} = y(x_n).
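As an illustration, the crude iteration can be sketched as follows; the names (pure_fixed_point, y, x0, tol, maxit) are hypothetical and not part of the GNE package API:

```python
import math

# Illustrative sketch of the crude fixed-point iteration x_{n+1} = y(x_n);
# stops when two successive iterates are within tol of each other.
def pure_fixed_point(y, x0, tol=1e-6, maxit=100):
    x = x0
    for _ in range(maxit):
        x_new = y(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# y(x) = cos(x) has a unique fixed point near 0.739085
root = pure_fixed_point(math.cos, 1.0)
```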
(b) Polynomial methods:
- relaxation algorithm (linear extrapolation):
The next iterate is computed as
x_{n+1} = (1-α_n) x_n + α_n y(x_n).
The step α_n can be computed in different ways: a constant, a decreasing
sequence or a line-search method. In the game-theory literature, the decreasing
sequence refers to the method of Uryasev and Rubinstein (method="UR") while the
line-search method refers to the method of von Heusinger (method="vH"). Note that
a constant step can be obtained with the UR method.
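A hedged sketch of the relaxation update with a user-supplied step; the names here are illustrative only (in the GNE package the step is supplied via stepfunc):

```python
import math

# Illustrative sketch of the relaxation iteration
#   x_{n+1} = (1 - a_n) x_n + a_n y(x_n),
# with a decreasing step a_n = 1/(n+1) by default (Uryasev-Rubinstein style).
def relaxation(y, x0, step=lambda n: 1.0 / (n + 1), tol=1e-6, maxit=1000):
    x = x0
    for n in range(maxit):
        a = step(n)
        x_new = (1 - a) * x + a * y(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# A constant step is the special case step = lambda n: a, for fixed a in (0, 1]
root = relaxation(math.cos, 1.0, step=lambda n: 0.5)
```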
- RRE and MPE method:
Reduced Rank Extrapolation and Minimal Polynomial Extrapolation
methods are polynomial extrapolation methods, where the monomials are functional
“powers” of the y function, i.e. function compositions of y. At order 1, RRE and MPE
consist of the update
x_{n+1} = x_n - t_n (y(x_n) - x_n),
where t_n equals
<v_n, r_n> / <v_n, v_n> for RRE1 and <r_n, r_n> / <v_n, r_n> for MPE1, with
r_n = y(x_n) - x_n and v_n = y(y(x_n)) - 2y(x_n) + x_n.
To use RRE/MPE methods, set method = "RRE" or method = "MPE".
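In one dimension the two steplengths coincide (t_n = r_n / v_n) and the order-1 step reduces to Aitken's Δ²/Steffensen acceleration. A hedged scalar sketch (hypothetical names, not the GNE package implementation; for vectors the products <.,.> become inner products):

```python
import math

# Illustrative scalar sketch of one order-1 RRE/MPE extrapolation step.
# With these steplength formulas t_n typically comes out negative, so the
# update x_{n+1} = x_n - t_n r_n moves towards the fixed point, consistently
# with the squared update x_n - 2 t_n r_n + t_n^2 v_n.
def extrapolation_step(y, x, variant="MPE"):
    yx = y(x)
    yyx = y(yx)
    r = yx - x                   # r_n = y(x_n) - x_n
    v = yyx - 2 * yx + x         # v_n = y(y(x_n)) - 2 y(x_n) + x_n
    if variant == "RRE":
        t = (v * r) / (v * v)    # <v_n, r_n> / <v_n, v_n>
    else:
        t = (r * r) / (v * r)    # <r_n, r_n> / <v_n, r_n>
    return x - t * r             # x_{n+1} = x_n - t_n r_n

x = 1.0
for _ in range(4):
    x = extrapolation_step(math.cos, x)
```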
- squaring method:
It consists in applying an extrapolation method (such as RRE or MPE)
after two iterations of the linear extrapolation, i.e.
x_{n+1} = x_n - 2 t_n r_n + t_n^2 v_n.
The squared version of RRE/MPE methods are
available via setting method = "SqRRE" or method = "SqMPE".
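A matching scalar sketch of one squared step, reusing the MPE1 steplength (hypothetical names, not the GNE package implementation):

```python
import math

# Illustrative scalar sketch of one squared extrapolation step
#   x_{n+1} = x_n - 2 t_n r_n + t_n^2 v_n   (SQUAREM-style),
# i.e. the linear extrapolation x - t r applied twice with the same t.
def squared_step(y, x):
    yx = y(x)
    yyx = y(yx)
    r = yx - x                   # r_n = y(x_n) - x_n
    v = yyx - 2 * yx + x         # v_n = y(y(x_n)) - 2 y(x_n) + x_n
    t = (r * r) / (v * r)        # MPE1 steplength
    return x - 2 * t * r + t * t * v

x = 1.0
for _ in range(4):
    x = squared_step(math.cos, x)
```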
(c) Epsilon algorithms:
Not implemented.
For details on fixed point methods, see Varadhan & Roland (2004).
The control.outer argument is a list that can supply any of the following components:
merit="FP" and method="pure"
see fpiter.
the default parameters are list(tol=1e-6, maxiter=100, trace=TRUE).
merit="FP" and method!="pure"
see squarem.
the default parameters are list(tol=1e-6, maxiter=100, trace=TRUE).
merit!="FP"
parameters are
tol
The absolute convergence tolerance. Defaults to 1e-6.
maxit
The maximum number of iterations. Defaults to 100.
echo
A logical or an integer (0, 1, 2, 3) to print traces.
Defaults to FALSE, i.e. 0.
sigma, beta
parameters for the von Heusinger algorithm.
Defaults to 9/10 and 1/2 respectively.
Value
A list with components:
par
The best set of parameters found.
value
The value of the merit function.
outer.counts
A two-element integer vector giving the number of
calls to fixed-point and merit functions respectively.
outer.iter
The outer iteration number.
code
The values returned are:
1
Function criterion is near zero.
Convergence of function values has been achieved.
4
Iteration limit maxit exceeded.
100
An error occurred during execution.
inner.iter
The iteration number when
computing the fixed-point function.
inner.counts
A two-element integer
vector giving the number of calls to the gap function and its gradient
when computing the fixed-point function.
message
A string describing the termination code.
Author(s)
Christophe Dutang
References
A. von Heusinger (2009),
Numerical Methods for the Solution of the Generalized Nash Equilibrium Problem,
Ph. D. Thesis.
A. von Heusinger and C. Kanzow (2009),
Optimization reformulations of the generalized Nash equilibrium problem using Nikaido-Isoda-type functions,
Computational Optimization and Applications.
S. Uryasev and R.Y. Rubinstein (1994),
On relaxation algorithms in computation of noncooperative equilibria,
IEEE Transactions on Automatic Control.
R. Varadhan and C. Roland (2004),
Squared Extrapolation Methods (SQUAREM): A New Class of Simple and Efficient Numerical
Schemes for Accelerating the Convergence of the EM Algorithm,
Johns Hopkins University, Dept. of Biostatistics Working Papers.
See Also
See GNE.ceq, GNE.minpb and GNE.nseq
for other approaches.