
Rcgmin                                                      R Documentation

An R implementation of a nonlinear conjugate gradient algorithm with the Dai / Yuan update and restart. Based on Nash (1979) Algorithm 22 for its main structure.

Description

The purpose of Rcgmin is to minimize a function of many parameters by a nonlinear conjugate gradient method. The function may be unconstrained, or subject to bounds (box) constraints and masks (equality constraints that hold selected parameters fixed). The code is written entirely in R to allow users to explore and understand the method.

Rcgmin is a wrapper that calls Rcgminu for unconstrained problems and Rcgminb when bounds or masks are supplied.
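
For example (a minimal sketch using an illustrative quadratic objective, not package code), a call without bounds or masks is handled by the unconstrained routine, while supplying lower and upper routes the problem through the bounded code:

   sq   <- function(x) sum((x - 1)^2)   # simple quadratic, minimum at rep(1, n)
   sq.g <- function(x) 2 * (x - 1)      # its analytic gradient
   # no bounds or masks: dispatched to Rcgminu
   a1 <- Rcgmin(par = rep(0, 3), fn = sq, gr = sq.g)
   # bounds supplied: dispatched to Rcgminb
   a2 <- Rcgmin(par = rep(0, 3), fn = sq, gr = sq.g,
                lower = rep(-2, 3), upper = rep(0.5, 3))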

Usage

   Rcgmin(par, fn, gr, lower, upper, bdmsk, control = list(), ...)

Arguments

par

A numeric vector of starting estimates.

fn

A function that returns the value of the objective at the supplied set of parameters par using auxiliary data in .... The first argument of fn must be par.

gr

A function that returns the gradient of the objective at the supplied set of parameters par, using auxiliary data in .... The first argument of gr must be par. This function returns the gradient as a numeric vector.

If gr is not provided or is NULL, then grad() from package numDeriv is used. However, we strongly recommend carefully coded and checked analytic derivatives for Rcgmin (a simple check is sketched after this argument list).

lower

A vector of lower bounds on the parameters.

upper

A vector of upper bounds on the parameters.

bdmsk

An indicator vector, having 1 for each parameter that is "free" or unconstrained, and 0 for any parameter that is fixed or MASKED for the duration of the optimization (see the sketch after this argument list).

control

An optional list of control settings.

...

Further arguments to be passed to fn.
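
As a brief illustration of the gr and bdmsk arguments (a minimal sketch, not package code; sq and sq.g are the illustrative functions from the Description sketch above), an analytic gradient can be checked against a numerical one, and a mask can be constructed, as follows:

   require(numDeriv)
   ## check the analytic gradient against numDeriv at a test point
   x0 <- c(0.3, -1.2, 2.5)
   max(abs(sq.g(x0) - grad(sq, x0)))   # should be near machine precision
   ## hold parameter 2 fixed at its starting value, leave the others free
   bdmsk <- c(1, 0, 1)
   a3 <- Rcgmin(par = c(2, 5, -1), fn = sq, gr = sq.g, bdmsk = bdmsk)
   a3$par                              # parameter 2 stays at its starting value 5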

Details

The function fn must return a single numeric value.

The control argument is a list of optional settings. The following components are recognized.

maxit

A limit on the number of iterations (default 500). Note that this is used to compute the function evaluation limit maxfeval <- round(sqrt(n+1)*maxit), where n is the number of parameters to be minimized (see the sketch following this list).

trace

Set 0 (default) for no output, >0 for trace output (larger values imply more output).

eps

Tolerance used to calculate numerical gradients. Default is 1.0E-7. See source code for Rcgmin for details of application.

dowarn

Set TRUE (the default) if warnings are to be generated by the optimizer; dowarn = FALSE suppresses them.

tol

Tolerance used in testing the size of the square of the gradient. The default is 0 on input, in which case a value tolgr = npar*npar*.Machine$double.eps is used in the test crossprod(g) <= tolgr * (abs(fmin) + reltest). If the user supplies a non-zero value for tol, that value is used for tolgr.

reltest = 100 is fixed and only alterable by changing the code. fmin is the current best value found for the function minimum.

Note that the scale of the gradient means that tests for a small gradient can easily be mismatched to a given problem. The defaults in Rcgmin are a "best guess".
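
The quantities above are easy to reproduce directly. The following sketch (illustrative only, with npar assumed to be 10) shows the default limits and the form of the gradient test:

   npar  <- 10                                   # number of parameters (assumed)
   maxit <- 500                                  # default iteration limit
   maxfeval <- round(sqrt(npar + 1) * maxit)     # function evaluation limit
   tolgr   <- npar * npar * .Machine$double.eps  # default gradient tolerance
   reltest <- 100                                # fixed offset in the test
   ## convergence test applied to the gradient g at the best function value fmin:
   ##    crossprod(g) <= tolgr * (abs(fmin) + reltest)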

The source code of Rcgmin is likely to remain a work in progress for some time, so users should watch the console output.

As of 2011-11-21, the following controls have been REMOVED:

usenumDeriv

There is now a choice of numerical gradient routines. See argument gr.

maximize

To maximize user_function, supply fn as a function that computes (-1)*user_function (see the sketch below and the negmaxfn example in the Examples section). An alternative is to call Rcgmin via the package optimx, where the MAXIMIZE field of the OPCON structure in package optfntools is used.
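
A minimal negation sketch (illustrative functions only; the negmaxfn example below does the same thing):

   fmax <- function(x) 10 - sum((x - 1)^2)       # function to be maximized (illustrative)
   fneg <- function(x) -fmax(x)                  # minimize the negative instead
   ansneg <- Rcgmin(par = rep(0, 3), fn = fneg)  # numDeriv gradient used (with a warning)
   -ansneg$value                                 # recovers the maximum of fmax (10)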

Value

A list with components:

par

The best set of parameters found.

value

The value of the objective at the best set of parameters found.

counts

A two-element integer vector giving the number of calls to 'fn' and 'gr' respectively. This excludes those calls needed to compute the Hessian, if requested, and any calls to 'fn' to compute a finite-difference approximation to the gradient.

convergence

An integer code. '0' indicates successful convergence. '1' indicates that the function evaluation count 'maxfeval' was reached. '2' indicates that the initial point is infeasible.

message

A character string giving any additional information returned by the optimizer, or 'NULL'.

bdmsk

Returned index vector describing the status of bounds and masks at the proposed solution. Parameters for which bdmsk is 1 are unconstrained or "free"; those with bdmsk 0 are masked, i.e., fixed. For historical reasons, a parameter at a lower bound is indicated by -3 and at an upper bound by -1.
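
These codes can be inspected directly from a returned result. A brief sketch, using the genrose.f and genrose.g functions and the bounds of the grbds1 example in the Examples section:

   ans <- Rcgmin(rep(pi, 4), genrose.f, gr = genrose.g,
                 lower = rep(2, 4), upper = rep(10, 4))
   which(ans$bdmsk == -3)   # parameters that finished at a lower bound
   which(ans$bdmsk == -1)   # parameters that finished at an upper bound
   which(ans$bdmsk ==  0)   # masked (fixed) parameters
   which(ans$bdmsk ==  1)   # free parameters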

References

Dai, Y. H. and Y. Yuan (2001). An efficient hybrid conjugate gradient method for unconstrained optimization. Annals of Operations Research 103 (1-4), 33–47.

Nash JC (1979). Compact Numerical Methods for Computers: Linear Algebra and Function Minimisation. Adam Hilger, Bristol. Second Edition, 1990, Bristol: Institute of Physics Publications.

Nash, J. C. and M. Walker-Smith (1987). Nonlinear Parameter Estimation: An Integrated System in BASIC. New York: Marcel Dekker. See http://www.nashinfo.com/nlpe.htm for a downloadable version of this plus some extras.

See Also

optim

Examples

#####################
require(numDeriv)
## Rosenbrock Banana function
fr <- function(x) {
    x1 <- x[1]
    x2 <- x[2]
    100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}

grr <- function(x) { ## Gradient of 'fr'
    x1 <- x[1]
    x2 <- x[2]
    c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
       200 *      (x2 - x1 * x1))
}

grn<-function(x){
    gg<-grad(fr, x)
}  


ansrosenbrock0 <- Rcgmin(fn=fr,gr=grn, par=c(1,2))
print(ansrosenbrock0) # use print to allow copy to separate file that 
#    can be called using source()
#####################
# Simple bounds and masks test
bt.f<-function(x){
 sum(x*x)
}

bt.g<-function(x){
  gg<-2.0*x
}

n<-10
xx<-rep(0,n)
lower<-rep(0,n)
upper<-lower # to get arrays set
bdmsk<-rep(1,n)
bdmsk[(trunc(n/2)+1)]<-0
for (i in 1:n) { 
   lower[i]<-1.0*(i-1)*(n-1)/n
   upper[i]<-1.0*i*(n+1)/n
}
xx<-0.5*(lower+upper)
ansbt<-Rcgmin(xx, bt.f, bt.g, lower, upper, bdmsk, control=list(trace=1))

print(ansbt)

#####################
genrose.f<- function(x, gs=NULL){ # objective function
## One generalization of the Rosenbrock banana valley function (n parameters)
	n <- length(x)
        if(is.null(gs)) { gs=100.0 }
	fval<-1.0 + sum (gs*(x[1:(n-1)]^2 - x[2:n])^2 + (x[2:n] - 1)^2)
        return(fval)
}
genrose.g <- function(x, gs=NULL){
# vectorized gradient for genrose.f
# Ravi Varadhan 2009-04-03
	n <- length(x)
        if(is.null(gs)) { gs=100.0 }
	gg <- as.vector(rep(0, n))
	tn <- 2:n
	tn1 <- tn - 1
	z1 <- x[tn] - x[tn1]^2
	z2 <- 1 - x[tn]
	gg[tn] <- 2 * (gs * z1 - z2)
	gg[tn1] <- gg[tn1] - 4 * gs * x[tn1] * z1
	gg
}

# analytic gradient test
xx<-rep(pi,10)
lower<-NULL
upper<-NULL
bdmsk<-NULL
genrosea<-Rcgmin(xx,genrose.f, genrose.g, gs=10)
genrosenn<-Rcgmin(xx,genrose.f, gs=10) # use local numerical gradient
cat("genrosea uses analytic gradient\n")
print(genrosea)
cat("genrosenn uses numDeriv gradient\n")
print(genrosenn)



cat("timings B vs U\n")
lo<-rep(-100,10)
up<-rep(100,10)
bdmsk<-rep(1,10)
tb<-system.time(ab<-Rcgminb(xx,genrose.f, genrose.g, lower=lo, upper=up, bdmsk=bdmsk))[1]
tu<-system.time(au<-Rcgminu(xx,genrose.f, genrose.g))[1]
cat("times U=",tu,"   B=",tb,"\n")
cat("solution Rcgminu\n")
print(au)
cat("solution Rcgminb\n")
print(ab)
cat("diff fu-fb=",au$value-ab$value,"\n")
cat("max abs parameter diff = ", max(abs(au$par-ab$par)),"\n")



maxfn<-function(x) {
      	n<-length(x)
	ss<-seq(1,n)
	f<-10-(crossprod(x-ss))^2
	f<-as.numeric(f)
	return(f)
}

gmaxfn<-function(x) {
     gg<-grad(maxfn, x) 
}


negmaxfn<-function(x) {
	f<-(-1)*maxfn(x)
	return(f)
}



cat("test that maximize=TRUE works correctly\n")

n<-6
xx<-rep(1,n)
ansmax<-Rcgmin(xx,maxfn, control=list(maximize=TRUE,trace=1))
print(ansmax)

cat("using the negmax function should give same parameters\n")
ansnegmax<-Rcgmin(xx,negmaxfn, control=list(trace=1))
print(ansnegmax)


#####################  From Rvmmin.Rd
cat("test bounds and masks\n")
nn<-4
startx<-rep(pi,nn)
lo<-rep(2,nn)
up<-rep(10,nn)
grbds1<-Rcgmin(startx,genrose.f, gr=genrose.g,lower=lo,upper=up) 
print(grbds1)

cat("test lower bound only\n")
nn<-4
startx<-rep(pi,nn)
lo<-rep(2,nn)
grbds2<-Rcgmin(startx,genrose.f, gr=genrose.g,lower=lo) 
print(grbds2)

cat("test lower bound single value only\n")
nn<-4
startx<-rep(pi,nn)
lo<-2
up<-rep(10,nn)
grbds3<-Rcgmin(startx,genrose.f, gr=genrose.g,lower=lo) 
print(grbds3)

cat("test upper bound only\n")
nn<-4
startx<-rep(pi,nn)
lo<-rep(2,nn)
up<-rep(10,nn)
grbds4<-Rcgmin(startx,genrose.f, gr=genrose.g,upper=up) 
print(grbds4)

cat("test upper bound single value only\n")
nn<-4
startx<-rep(pi,nn)
grbds5<-Rcgmin(startx,genrose.f, gr=genrose.g,upper=10) 
print(grbds5)



cat("test masks only\n")
nn<-6
bd<-c(1,1,0,0,1,1)
startx<-rep(pi,nn)
grbds6<-Rcgmin(startx,genrose.f, gr=genrose.g,bdmsk=bd) 
print(grbds6)

cat("test upper bound on first two elements only\n")
nn<-4
startx<-rep(pi,nn)
upper<-c(10,8, Inf, Inf)
grbds7<-Rcgmin(startx,genrose.f, gr=genrose.g,upper=upper) 
print(grbds7)


cat("test lower bound on first two elements only\n")
nn<-4
startx<-rep(0,nn)
lower<-c(0,1.1, -Inf, -Inf)
grbds8<-Rcgmin(startx,genrose.f,genrose.g,lower=lower, control=list(maxit=2000)) 
print(grbds8)

cat("test n=1 problem using simple squares of parameter\n")

sqtst<-function(xx) {
   res<-sum((xx-2)*(xx-2))
}

gsqtst<-function(xx) {
    gg<-2*(xx-2)
}

######### One dimension test
nn<-1
startx<-rep(0,nn)
onepar<-Rcgmin(startx,sqtst,  gr=gsqtst,control=list(trace=1)) 
print(onepar)

cat("Suppress warnings\n")
oneparnw<-Rcgmin(startx,sqtst,  gr=gsqtst,control=list(dowarn=FALSE,trace=1)) 
print(oneparnw)

Results


R version 3.3.1 (2016-06-21) -- "Bug in Your Hair"
Copyright (C) 2016 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)

> library(Rcgmin)
> ### Name: Rcgmin
> ### Title: An R implementation of a nonlinear conjugate gradient algorithm
> ###   with the Dai / Yuan update and restart. Based on Nash (1979)
> ###   Algorithm 22 for its main structure.
> ### Aliases: Rcgmin
> ### Keywords: nonlinear optimize
> 
> ### ** Examples
> 
> #####################
> require(numDeriv)
Loading required package: numDeriv
> ## Rosenbrock Banana function
> fr <- function(x) {
+     x1 <- x[1]
+     x2 <- x[2]
+     100 * (x2 - x1 * x1)^2 + (1 - x1)^2
+ }
> 
> grr <- function(x) { ## Gradient of 'fr'
+     x1 <- x[1]
+     x2 <- x[2]
+     c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
+        200 *      (x2 - x1 * x1))
+ }
> 
> grn<-function(x){
+     gg<-grad(fr, x)
+ }  
> 
> 
> ansrosenbrock0 <- Rcgmin(fn=fr,gr=grn, par=c(1,2))
> print(ansrosenbrock0) # use print to allow copy to separate file that 
$par
[1] 0.9999999 0.9999999

$value
[1] 2.769504e-15

$counts
[1] 68 30

$convergence
[1] 0

$message
[1] "Rcgmin seems to have converged"

> #    can be called using source()
> #####################
> # Simple bounds and masks test
> bt.f<-function(x){
+  sum(x*x)
+ }
> 
> bt.g<-function(x){
+   gg<-2.0*x
+ }
> 
> n<-10
> xx<-rep(0,n)
> lower<-rep(0,n)
> upper<-lower # to get arrays set
> bdmsk<-rep(1,n)
> bdmsk[(trunc(n/2)+1)]<-0
> for (i in 1:n) { 
+    lower[i]<-1.0*(i-1)*(n-1)/n
+    upper[i]<-1.0*i*(n+1)/n
+ }
> xx<-0.5*(lower+upper)
> ansbt<-Rcgmin(xx, bt.f, bt.g, lower, upper, bdmsk, control=list(trace=1))
Rcgmin -- J C Nash 2009 - bounds constraint version of new CG
an R implementation of Alg 22 with Yuan/Dai modification
Initial function value= 337.525 
Initial fn= 337.525 
1   0   1   337.525   last decrease= NA 
3   1   2   251.455   last decrease= 86.06996 
Yuan/Dai cycle reset
3   2   1   251.455   last decrease= NA 
5   3   2   249.2466   last decrease= 2.208412 
Yuan/Dai cycle reset
5   4   1   249.2466   last decrease= NA 
7   5   2   247.4157   last decrease= 1.830923 
Yuan/Dai cycle reset
7   6   1   247.4157   last decrease= NA 
9   7   2   245.9974   last decrease= 1.41828 
Yuan/Dai cycle reset
9   8   1   245.9974   last decrease= NA 
11   9   2   243.7158   last decrease= 2.281617 
Yuan/Dai cycle reset
11   10   1   243.7158   last decrease= NA 
13   11   2   242.6786   last decrease= 1.037168 
Yuan/Dai cycle reset
13   12   1   242.6786   last decrease= NA 
15   13   2   241.9403   last decrease= 0.7383196 
Yuan/Dai cycle reset
15   14   1   241.9403   last decrease= NA 
17   15   2   241.5045   last decrease= 0.4358326 
Yuan/Dai cycle reset
17   16   1   241.5045   last decrease= NA 
19   17   2   241.4025   last decrease= 0.1019875 
Very small gradient -- gradsqr = 0 
Rcgmin seems to have converged 
> 
> print(ansbt)
$par
 [1] 0.00 0.90 1.80 2.70 3.60 5.55 5.40 6.30 7.20 8.10

$value
[1] 241.4025

$counts
[1] 19 18

$convergence
[1] 0

$message
[1] "Rcgmin seems to have converged"

$bdmsk
 [1]  1 -3 -3 -3 -3  0 -3 -3 -3 -3

> 
> #####################
> genrose.f<- function(x, gs=NULL){ # objective function
+ ## One generalization of the Rosenbrock banana valley function (n parameters)
+ 	n <- length(x)
+         if(is.null(gs)) { gs=100.0 }
+ 	fval<-1.0 + sum (gs*(x[1:(n-1)]^2 - x[2:n])^2 + (x[2:n] - 1)^2)
+         return(fval)
+ }
> genrose.g <- function(x, gs=NULL){
+ # vectorized gradient for genrose.f
+ # Ravi Varadhan 2009-04-03
+ 	n <- length(x)
+         if(is.null(gs)) { gs=100.0 }
+ 	gg <- as.vector(rep(0, n))
+ 	tn <- 2:n
+ 	tn1 <- tn - 1
+ 	z1 <- x[tn] - x[tn1]^2
+ 	z2 <- 1 - x[tn]
+ 	gg[tn] <- 2 * (gs * z1 - z2)
+ 	gg[tn1] <- gg[tn1] - 4 * gs * x[tn1] * z1
+ 	gg
+ }
> 
> # analytic gradient test
> xx<-rep(pi,10)
> lower<-NULL
> upper<-NULL
> bdmsk<-NULL
> genrosea<-Rcgmin(xx,genrose.f, genrose.g, gs=10)
> genrosenn<-Rcgmin(xx,genrose.f, gs=10) # use local numerical gradient
Warning message:
In Rcgminu(par, fn, gr, control = control, ...) :
  A NULL gradient function is being replaced by numDeriv 'grad()'for Rcgmin
> cat("genrosea uses analytic gradient\n")
genrosea uses analytic gradient
> print(genrosea)
$par
 [1] 1 1 1 1 1 1 1 1 1 1

$value
[1] 1

$counts
[1] 87 39

$convergence
[1] 0

$message
[1] "Rcgmin seems to have converged"

> cat("genrosenn uses numDeriv gradient\n")
genrosenn uses numDeriv gradient
> print(genrosenn)
$par
 [1] 1 1 1 1 1 1 1 1 1 1

$value
[1] 1

$counts
[1] 87 39

$convergence
[1] 0

$message
[1] "Rcgmin seems to have converged"

> 
> 
> 
> cat("timings B vs U\n")
timings B vs U
> lo<-rep(-100,10)
> up<-rep(100,10)
> bdmsk<-rep(1,10)
> tb<-system.time(ab<-Rcgminb(xx,genrose.f, genrose.g, lower=lo, upper=up, bdmsk=bdmsk))[1]
> tu<-system.time(au<-Rcgminu(xx,genrose.f, genrose.g))[1]
> cat("times U=",tu,"   B=",tb,"\n")
times U= 0.004    B= 0.012 
> cat("solution Rcgminu\n")
solution Rcgminu
> print(au)
$par
 [1] 1 1 1 1 1 1 1 1 1 1

$value
[1] 1

$counts
[1] 146  69

$convergence
[1] 0

$message
[1] "Rcgmin seems to have converged"

> cat("solution Rcgminb\n")
solution Rcgminb
> print(ab)
$par
 [1] 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000
 [8] 1.0000000 1.0000000 0.9999999

$value
[1] 1

$counts
[1] 120  58

$convergence
[1] 0

$message
[1] "Rcgmin seems to have converged"

$bdmsk
 [1] 1 1 1 1 1 1 1 1 1 1

> cat("diff fu-fb=",au$value-ab$value,"\n")
diff fu-fb= -1.110223e-14 
> cat("max abs parameter diff = ", max(abs(au$par-ab$par)),"\n")
max abs parameter diff =  8.590758e-08 
> 
> 
> 
> maxfn<-function(x) {
+       	n<-length(x)
+ 	ss<-seq(1,n)
+ 	f<-10-(crossprod(x-ss))^2
+ 	f<-as.numeric(f)
+ 	return(f)
+ }
> 
> gmaxfn<-function(x) {
+      gg<-grad(maxfn, x) 
+ }
> 
> 
> negmaxfn<-function(x) {
+ 	f<-(-1)*maxfn(x)
+ 	return(f)
+ }
> 
> 
> 
> cat("test that maximize=TRUE works correctly\n")
test that maximize=TRUE works correctly
> 
> n<-6
> xx<-rep(1,n)
> ansmax<-Rcgmin(xx,maxfn, control=list(maximize=TRUE,trace=1))
Warning message:
In Rcgminu(par, fn, gr, control = control, ...) :
  Rcgmin no longer supports maximize 111121 -- see documentation
> print(ansmax)
[[1]]
[1] 1 1 1 1 1 1

[[2]]
[1] NA

[[3]]
[1] 0 0

[[4]]
[1] 9999

[[5]]
[1] "Rcgmin no longer supports maximize 111121"

> 
> cat("using the negmax function should give same parameters\n")
using the negmax function should give same parameters
> ansnegmax<-Rcgmin(xx,negmaxfn, control=list(trace=1))
Rcgminu -- J C Nash 2009 - unconstrained version CG min
an R implementation of Alg 22 with Yuan/Dai modification
Initial function value= 3015 
Initial fn= 3015 
1   0   1   3015   last decrease= NA 
***6   1   2   -9.572997   last decrease= 3024.573 
Yuan/Dai cycle reset
6   2   1   -9.572997   last decrease= NA 
8   3   2   -9.917437   last decrease= 0.3444404 
Yuan/Dai cycle reset
8   4   1   -9.917437   last decrease= NA 
10   5   2   -9.988384   last decrease= 0.07094708 
Yuan/Dai cycle reset
10   6   1   -9.988384   last decrease= NA 
12   7   2   -9.998354   last decrease= 0.009969833 
Yuan/Dai cycle reset
12   8   1   -9.998354   last decrease= NA 
14   9   2   -9.999767   last decrease= 0.001412908 
Yuan/Dai cycle reset
14   10   1   -9.999767   last decrease= NA 
16   11   2   -9.99996   last decrease= 0.0001927171 
Yuan/Dai cycle reset
16   12   1   -9.99996   last decrease= NA 
18   13   2   -9.999992   last decrease= 3.289794e-05 
Yuan/Dai cycle reset
18   14   1   -9.999992   last decrease= NA 
20   15   2   -9.999999   last decrease= 6.125639e-06 
Yuan/Dai cycle reset
20   16   1   -9.999999   last decrease= NA 
22   17   2   -10   last decrease= 1.179795e-06 
Yuan/Dai cycle reset
22   18   1   -10   last decrease= NA 
24   19   2   -10   last decrease= 2.304774e-07 
Yuan/Dai cycle reset
24   20   1   -10   last decrease= NA 
26   21   2   -10   last decrease= 4.530674e-08 
Yuan/Dai cycle reset
26   22   1   -10   last decrease= NA 
28   23   2   -10   last decrease= 8.904914e-09 
Yuan/Dai cycle reset
28   24   1   -10   last decrease= NA 
30   25   2   -10   last decrease= 1.888683e-09 
Very small gradient -- gradsqr = 9.84687951716435e-14 
Rcgmin seems to have converged 
Warning message:
In Rcgminu(par, fn, gr, control = control, ...) :
  A NULL gradient function is being replaced by numDeriv 'grad()'for Rcgmin
> print(ansnegmax)
$par
[1] 1.000000 1.999423 2.998846 3.998268 4.997691 5.997114

$value
[1] -10

$counts
[1] 30 26

$convergence
[1] 0

$message
[1] "Rcgmin seems to have converged"

> 
> 
> #####################  From Rvmmin.Rd
> cat("test bounds and masks\n")
test bounds and masks
> nn<-4
> startx<-rep(pi,nn)
> lo<-rep(2,nn)
> up<-rep(10,nn)
> grbds1<-Rcgmin(startx,genrose.f, gr=genrose.g,lower=lo,upper=up) 
> print(grbds1)
$par
[1]  2.000000  2.000000  3.181997 10.000000

$value
[1] 556.2391

$counts
[1] 34 24

$convergence
[1] 0

$message
[1] "Rcgmin seems to have converged"

$bdmsk
[1] -3 -3  1 -1

> 
> cat("test lower bound only\n")
test lower bound only
> nn<-4
> startx<-rep(pi,nn)
> lo<-rep(2,nn)
> grbds2<-Rcgmin(startx,genrose.f, gr=genrose.g,lower=lo) 
> print(grbds2)
$par
[1]  2.000000  2.000000  3.318724 10.914782

$value
[1] 553.0761

$counts
[1] 1125  156

$convergence
[1] 1

$message
[1] "Too many function evaluations (> 1118) "

$bdmsk
[1] -3 -3  1  1

> 
> cat("test lower bound single value only\n")
test lower bound single value only
> nn<-4
> startx<-rep(pi,nn)
> lo<-2
> up<-rep(10,nn)
> grbds3<-Rcgmin(startx,genrose.f, gr=genrose.g,lower=lo) 
> print(grbds3)
$par
[1]  2.000000  2.000000  3.318724 10.914782

$value
[1] 553.0761

$counts
[1] 1125  156

$convergence
[1] 1

$message
[1] "Too many function evaluations (> 1118) "

$bdmsk
[1] -3 -3  1  1

> 
> cat("test upper bound only\n")
test upper bound only
> nn<-4
> startx<-rep(pi,nn)
> lo<-rep(2,nn)
> up<-rep(10,nn)
> grbds4<-Rcgmin(startx,genrose.f, gr=genrose.g,upper=up) 
> print(grbds4)
$par
[1] 1 1 1 1

$value
[1] 1

$counts
[1] 92 43

$convergence
[1] 0

$message
[1] "Rcgmin seems to have converged"

$bdmsk
[1] 1 1 1 1

> 
> cat("test upper bound single value only\n")
test upper bound single value only
> nn<-4
> startx<-rep(pi,nn)
> grbds5<-Rcgmin(startx,genrose.f, gr=genrose.g,upper=10) 
> print(grbds5)
$par
[1] 1 1 1 1

$value
[1] 1

$counts
[1] 92 43

$convergence
[1] 0

$message
[1] "Rcgmin seems to have converged"

$bdmsk
[1] 1 1 1 1

> 
> 
> 
> cat("test masks only\n")
test masks only
> nn<-6
> bd<-c(1,1,0,0,1,1)
> startx<-rep(pi,nn)
> grbds6<-Rcgmin(startx,genrose.f, gr=genrose.g,bdmsk=bd) 
> print(grbds6)
$par
[1]  1.331105  1.771839  3.141593  3.141593  5.890350 34.362593

$value
[1] 7268.939

$counts
[1] 1332  179

$convergence
[1] 1

$message
[1] "Too many function evaluations (> 1323) "

$bdmsk
[1] 1 1 0 0 1 1

> 
> cat("test upper bound on first two elements only\n")
test upper bound on first two elements only
> nn<-4
> startx<-rep(pi,nn)
> upper<-c(10,8, Inf, Inf)
> grbds7<-Rcgmin(startx,genrose.f, gr=genrose.g,upper=upper) 
> print(grbds7)
$par
[1] 1 1 1 1

$value
[1] 1

$counts
[1] 91 43

$convergence
[1] 0

$message
[1] "Rcgmin seems to have converged"

$bdmsk
[1] 1 1 1 1

> 
> 
> cat("test lower bound on first two elements only\n")
test lower bound on first two elements only
> nn<-4
> startx<-rep(0,nn)
> lower<-c(0,1.1, -Inf, -Inf)
> grbds8<-Rcgmin(startx,genrose.f,genrose.g,lower=lower, control=list(maxit=2000)) 
Warning messages:
1: In Rcgminb(par, fn, gr, lower = lower, upper = upper, bdmsk = bdmsk,  :
  x[1], set 0 to lower bound = 0
2: In Rcgminb(par, fn, gr, lower = lower, upper = upper, bdmsk = bdmsk,  :
  x[2], set 0 to lower bound = 1.1
> print(grbds8)
$par
[1] 0.000000 1.100000 1.197717 1.430224

$value
[1] 122.2511

$counts
[1] 57 23

$convergence
[1] 0

$message
[1] "Rcgmin seems to have converged"

$bdmsk
[1]  1 -3  1  1

> 
> cat("test n=1 problem using simple squares of parameter\n")
test n=1 problem using simple squares of parameter
> 
> sqtst<-function(xx) {
+    res<-sum((xx-2)*(xx-2))
+ }
> 
> gsqtst<-function(xx) {
+     gg<-2*(xx-2)
+ }
> 
> ######### One dimension test
> nn<-1
> startx<-rep(0,nn)
> onepar<-Rcgmin(startx,sqtst,  gr=gsqtst,control=list(trace=1)) 
Rcgminu -- J C Nash 2009 - unconstrained version CG min
an R implementation of Alg 22 with Yuan/Dai modification
Initial function value= 4 
Initial fn= 4 
1   0   1   4   last decrease= NA 
*4   1   2   1.774937e-30   last decrease= 4 
Very small gradient -- gradsqr = 7.09974814698911e-30 
Rcgmin seems to have converged 
> print(onepar)
$par
[1] 2

$value
[1] 1.774937e-30

$counts
[1] 4 2

$convergence
[1] 0

$message
[1] "Rcgmin seems to have converged"

> 
> cat("Suppress warnings\n")
Suppress warnings
> oneparnw<-Rcgmin(startx,sqtst,  gr=gsqtst,control=list(dowarn=FALSE,trace=1)) 
Rcgminu -- J C Nash 2009 - unconstrained version CG min
an R implementation of Alg 22 with Yuan/Dai modification
Initial function value= 4 
Initial fn= 4 
1   0   1   4   last decrease= NA 
*4   1   2   1.774937e-30   last decrease= 4 
Very small gradient -- gradsqr = 7.09974814698911e-30 
Rcgmin seems to have converged 
> print(oneparnw)
$par
[1] 2

$value
[1] 1.774937e-30

$counts
[1] 4 2

$convergence
[1] 0

$message
[1] "Rcgmin seems to have converged"

> 