snlpca(lambda, U, x, gamma, q = length(lambda), center,
type = c("exact", "nn"), sort = TRUE)
Arguments
lambda
optional vector of eigenvalues.
U
matrix of eigenvectors (principal components) stored in columns.
x
new data vector.
gamma
vector of learning rates (see Details).
q
number of eigenvectors to compute.
center
optional centering vector for x.
type
algorithm implementation: "exact" or "nn" (neural network).
sort
Should the new eigenpairs be sorted?
Details
The vector gamma determines the weight placed on the new data in updating each PC: the larger gamma, the more weight is given to x and the less to the current eigenvectors U. A common choice is of the form c/n, with n the sample size and c a suitable positive constant. Argument gamma can be specified either as a single positive number (common to all PCs) or as a vector of length q.
If sort is TRUE and lambda is not missing, the updated eigenpairs are sorted by decreasing eigenvalue. Otherwise, they are not sorted.
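For illustration, here is a minimal sketch of the two ways gamma can be supplied; the values of q, n and the constant c0 below are arbitrary, not recommendations.
## Two ways to specify gamma (q, n, c0 are illustrative values)
q <- 5
n <- 1000              # observations processed so far
c0 <- 2                # constant in the c/n rule
gamma1 <- c0/n         # single rate shared by all q PCs
gamma2 <- c0/n * (1:q) # PC-specific rates, a vector of length q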
Value
A list with components
values
updated eigenvalues, or NULL if lambda is missing.
vectors
updated (rotated) eigenvectors.
Note
Subspace Network Learning (SNL) PCA can be implemented exactly ("exact") or through a neural network ("nn"). The latter is less accurate but much faster. Unlike the GHA and SGA algorithms, SNL does not consistently estimate the individual principal components; it only recovers the linear space they span.
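To see what subspace (rather than component) recovery means, note that two orthonormal bases differing by an orthogonal transformation span the same space and therefore yield the same projection matrix. The sketch below, with arbitrary dimensions, shows why estimates should be compared through their projectors:
## Sketch: compare projectors UU', not individual eigenvectors
d <- 10; q <- 3
U <- qr.Q(qr(matrix(rnorm(d*q), d, q)))  # orthonormal basis of a subspace
R <- qr.Q(qr(matrix(rnorm(q*q), q, q)))  # random q x q orthogonal matrix
V <- U %*% R                             # same subspace, different basis
norm(tcrossprod(U) - tcrossprod(V), "F") # ~ 0: identical projectors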
References
Oja, E. (1992). Principal components, minor components, and linear neural networks. Neural Networks.
See Also
ghapca, sgapca
Examples
## Initialization
n <- 1e4 # sample size
n0 <- 5e3 # initial sample size
d <- 10 # number of variables
q <- d # number of PC to compute
x <- matrix(runif(n*d), n, d)
x <- x %*% diag(sqrt(12*(1:d)))
# The covariance matrix of x is approximately diag(1:d), so its
# eigenvalues are close to 1, 2, ..., d and the corresponding
# eigenvectors are close to the canonical basis of R^d
## SNL PCA
xbar <- colMeans(x[1:n0,])
pca <- batchpca(x[1:n0,], q, center=xbar, byrow=TRUE)
for (i in (n0+1):n) {
  xbar <- updateMean(xbar, x[i,], i-1)
  pca <- snlpca(pca$values, pca$vectors, x[i,], 1/i, q, xbar)
}
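A possible follow-up, not part of the original example: with fewer PCs than variables, the fitted subspace can be compared to the span of the true leading PCs, which here are the last q2 canonical basis vectors.
## Illustrative follow-up: subspace error when tracking q2 < d leading PCs
q2 <- 5
xbar2 <- colMeans(x[1:n0,])
pca2 <- batchpca(x[1:n0,], q2, center=xbar2, byrow=TRUE)
for (i in (n0+1):n) {
  xbar2 <- updateMean(xbar2, x[i,], i-1)
  pca2 <- snlpca(pca2$values, pca2$vectors, x[i,], 1/i, q2, xbar2)
}
B <- diag(d)[, d:(d-q2+1)] # true leading PCs (largest eigenvalues)
norm(tcrossprod(pca2$vectors) - tcrossprod(B), "F") # small if subspaces agree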