data matrix. Unlike glasso, this function requires the original data, not just the covariance matrix.
lambda
a non-negative tuning parameter
subset
An ncol(x) by ncol(x) logical matrix, giving a subset of edges to test.
penalize.diagonal
logical. Whether or not to penalize the diagonal in the graphical lasso. Defaults
to FALSE.
tol
convergence tolerance for glasso or glmnet
...
for mbscore, additional arguments to be passed to lassoscore
Details
This function tests for pairwise association between features, using the graphical lasso (glassoscore) or neighborhood selection (mbscore). Tests are based on the penalized score statistic T_λ, described in Voorman et al (2014). Note that a feature is non-zero in the (graphical) lasso solution if and only if
| T_λ | > λ √ n,
where T_λ is the penalized score statistic.
Calculating the variance of T_λ can be computationally expensive for glassoscore. If there are q non-zero parameters in the graphical lasso solution, it roughly requires constructing and inverting a q x q matrix for each of the q non-zero parameters; that is, the complexity is roughly O(q^4).
For mbscore, the results are typically not symmetric. For instance, p.sand[-i,i] contains the p-values produced by lassoscore(x[,i],x[,-i],lambda), i.e. using x[,i] as the outcome variable, and thus p.sand[i,-i] contains the p-values associated with feature i when it is used as a predictor variable.
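The asymmetry described above can be seen directly by comparing entries of p.sand. A minimal sketch, assuming the lassoscore package is installed and that mbscore accepts a data matrix and tuning parameter as shown in the usage above (the simulated data are purely illustrative):

    library(lassoscore)

    set.seed(1)
    n <- 100; p <- 10
    x <- matrix(rnorm(n * p), n, p)   # n x p data matrix

    fit <- mbscore(x, lambda = 0.1)

    ## Column i holds p-values from using x[,i] as the outcome;
    ## row i holds p-values for feature i used as a predictor.
    fit$p.sand[2, 1]
    fit$p.sand[1, 2]

The two entries generally differ, since each is based on a different lasso regression.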
Value
An object of class either ‘glassoscore’ or ‘mbscore’, containing:
scores
the penalized score statistics
scorevar.model
the variance of the score statistics, estimated using a model-based variance estimate
scorevar.sand
the variance of the score statistics, estimated using a conservative (sandwich) variance estimate
p.model
p-value, using the model-based variance
p.sand
p-value, using the sandwich variance
beta
for mbscore, beta[-i,i] contains the coefficients from the lasso regression of x[,i] on x[,-i].
In addition, glassoscore contains the output from ‘glasso’ applied to x.
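The returned components can be inspected like any list. A minimal sketch, assuming a fitted object from glassoscore as above (the lambda value and data are illustrative):

    gfit <- glassoscore(x, lambda = 0.1)

    gfit$scores    # penalized score statistics
    gfit$p.model   # p-values from the model-based variance
    gfit$p.sand    # p-values from the sandwich variance

Because glassoscore also carries the output of ‘glasso’, the estimated inverse covariance from that fit is available from the same object.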