SemiParTRIVProbit can be used to fit trivariate binary models where the linear predictors of the three equations can
be flexibly specified using parametric and regression spline components.
In the basic setup this will be a list of three formulas. s terms
are used to specify smooth functions of
predictors. SemiParTRIVProbit supports the use of shrinkage smoothers for variable
selection purposes and more. See the examples below and the documentation of SemiParBIVProbit for further
details.
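As an illustration, a basic three-equation specification might be built as in the sketch below; the response and covariate names (y1, y2, y3, x1, x2) and the data frame dat are hypothetical:

```r
# Hypothetical three-equation list: each binary response gets its own
# linear predictor; s() specifies a smooth function of x2 once the list
# is passed to the fitting function.
f.l <- list(y1 ~ x1 + s(x2),
            y2 ~ x1 + s(x2),
            y3 ~ x1 + s(x2))

# out <- SemiParTRIVProbit(f.l, data = dat)
```

Formulas are only captured, not evaluated, at this stage, so the list can be constructed before the fitting call.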
data
An optional data frame, list or environment containing the variables in the model. If not found in data, the
variables are taken from environment(formula), typically the environment from which SemiParBIVProbit is called.
weights
Optional vector of prior weights to be used in fitting.
subset
Optional vector specifying a subset of observations to be used in the fitting process.
Model
It indicates the type of model to be used in the analysis. Possible values are "T" (trivariate model) and
"TSS" (trivariate model with double sample selection).
penCor
Type of penalty for the correlation coefficients. Possible values are "unpen", "lasso" and "ridge".
sp.penCor
Starting value for the smoothing parameter of penCor.
fp
If TRUE then a fully parametric model with unpenalised regression splines is fitted. See the example below.
infl.fac
Inflation factor for the model degrees of freedom in the approximate AIC. Smoother models can be obtained by setting
this parameter to a value greater than 1.
rinit
Starting trust region radius. The trust region radius is adjusted as the algorithm proceeds. See the documentation
of trust for further details.
rmax
Maximum allowed trust region radius. This may be set very large. If set small, the algorithm traces a steepest
descent path.
iterlimsp
A positive integer specifying the maximum number of loops to be performed before the smoothing parameter estimation
step is terminated.
tolsp
Tolerance to use in judging convergence of the algorithm when automatic smoothing parameter estimation is used.
gc.l
This is relevant when working with big datasets. If TRUE then the garbage collector is called more often than usual.
This keeps the memory footprint down but will slow down the routine.
parscale
The algorithm will operate as if optimizing objfun(x / parscale, ...) where parscale is a scalar. If missing then no
rescaling is done. See the
documentation of trust for more details.
extra.regI
If "t" then regularisation as from trust is applied to the information matrix if needed.
If different from "t" then extra regularisation is applied via the options "pC" (pivoted Cholesky - this
will only work when the information matrix is at least positive semi-definite) and "sED" (symmetric eigen-decomposition).
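Putting the arguments above together, a call for a harder fit might look like the following sketch; all object names (f.l, dat) are hypothetical and the fitting line is left commented out:

```r
# Hypothetical settings mirroring the arguments documented above.
fit.args <- list(Model = "T",        # plain trivariate model
                 penCor = "ridge",   # ridge penalty on the correlations
                 sp.penCor = 3,      # starting smoothing parameter for penCor
                 extra.regI = "pC",  # pivoted Cholesky regularisation
                 gc.l = TRUE)        # call the garbage collector more often

# out <- do.call(SemiParTRIVProbit,
#                c(list(formula = f.l, data = dat), fit.args))
```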
Details
This function fits trivariate models.
For sample selection models, if there are factors in the model then, before fitting, the user has to ensure
that each factor variable has the same levels in the selected sample
as in the complete dataset. Even if a model could be fitted in such a situation,
it may produce fits which are
not coherent with the nature of the correction sought. For more details see ?SemiParBIVProbit.
Value
The function returns an object of class SemiParTRIVProbit as described in SemiParTRIVProbitObject.
WARNINGS
Convergence failure may sometimes occur. Convergence can be checked using conv.check, which provides some
information about
the score and information matrix associated with the fitted model. The former should be 0 and the latter positive definite.
SemiParTRIVProbit() will produce some warnings if there is a convergence issue.
In such a situation, the user may use some extra regularisation (see extra.regI) and/or
rescaling (see parscale). Penalising the correlations via the argument penCor may help considerably, as
in our experience the correlation coefficients are typically the most difficult parameters to estimate in hard situations.
The above suggestions may help, especially the latter option. However, the user should also
look into the proportions of 1s and 0s available for each event of the trivariate model: certain
events may have few observations associated with them,
which will make estimation very difficult.
As usual, model complexity plays a role.