list containing lists (functions) of two vectors of equal length, named "args" and "vals": the arguments sorted in ascending order and the corresponding values, respectively (see the sketch after the argument descriptions)
labels
list of output labels of the functional observations
adc.args
Represents a function sample as a multidimensional one (dimension = "numFcn" + "numDer"), obtained by averaging (instance = "avr") or evaluating (instance = "val") each function and its derivative on "numFcn"
(resp. "numDer") equal nonoverlapping covering intervals.
The first two elements, named "args" and "vals", are the arguments sorted in
ascending order (with the same bounds for all functions) and the
corresponding values, respectively.
instance
type of discretizing the functions:
"avr" - by avaraging over intervals of the same length
"val" - by taking values on equally-spaced grid
numFcn
number of function intervals
numDer
number of first-derivative intervals
Set numFcn and numDer to -1 to apply cross-validation.
classifier.type
the classifier used in the transformed space. The default value is 'ddalpha'.
cv.complete
T: apply complete cross-validation
F: restrict cross-validation by Vapnik-Chervonenkis bound
seed
the random seed. The default value seed=0 makes no changes.
...
additional parameters passed to the classifier selected with the parameter classifier.type.
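The expected structure of dataf, labels and adc.args can be illustrated with a small hand-built sample. The values below are hypothetical and the sample is far too small to actually train on; it only mirrors the format described above.

## Two hypothetical functional observations in the documented format:
## each is a list with "args" (sorted, same bounds for all functions)
## and "vals" of equal length; labels is a parallel list.
dataf  = list(
  list(args = c(0, 0.25, 0.5, 0.75, 1), vals = c(0.0, 0.3, 0.5, 0.6, 0.9)),
  list(args = c(0, 0.25, 0.5, 0.75, 1), vals = c(1.0, 0.8, 0.7, 0.4, 0.1))
)
labels = list("up", "down")
## Discretization settings: average each function and its derivative
## over 2 (resp. 2) equal nonoverlapping intervals, giving dimension 4.
adc.args = list(instance = "avr", numFcn = 2, numDer = 2)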
Details
The functional DDalpha-classifier is a fast nonparametric procedure for classifying functional data. It consists of
a two-step transformation of the original data plus a classifier operating on a low-dimensional
hypercube. The functional data are first mapped into a finite-dimensional location-slope space
and then transformed by a multivariate depth function into the DD-plot, which is a subset of
the unit hypercube. This transformation yields a new notion of depth for functional data. Three
alternative depth functions are employed for this, as well as two rules for the final classification.
The resulting classifier is cross-validated over a small range of parameters only,
which is restricted by a Vapnik-Chervonenkis bound. The entire methodology does not involve
smoothing techniques, is completely nonparametric, and achieves Bayes optimality under
standard distributional settings. It is robust and efficiently computable.
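To make the location-slope mapping concrete, the following sketch discretizes a single function by averaging its values and a finite-difference derivative over equal intervals. The function name ls_coordinates is hypothetical and not part of the package, and the averaging used internally may differ; this is only an illustration of the idea.

## Hypothetical illustration of the location-slope mapping: a function given
## on a grid is reduced to numFcn interval averages of its values and
## numDer interval averages of a finite-difference derivative.
ls_coordinates = function(args, vals, numFcn = 2, numDer = 2) {
  dvals  = diff(vals) / diff(args)                 # finite-difference derivative
  dargs  = (head(args, -1) + tail(args, -1)) / 2   # midpoints of the grid cells
  avg_on = function(x, y, k) {
    breaks = seq(min(x), max(x), length.out = k + 1)
    bins   = cut(x, breaks, include.lowest = TRUE)
    tapply(y, bins, mean)                          # one average per interval
  }
  c(avg_on(args, vals, numFcn), avg_on(dargs, dvals, numDer))
}
## e.g. a 4-dimensional location-slope representation of sin on [0, 2*pi]
x = seq(0, 2 * pi, length.out = 101)
ls_coordinates(x, sin(x), numFcn = 2, numDer = 2)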
Value
Trained functional DDalpha-classifier
References
Mozharovskyi, P. (2015), Contributions to Depth-based Classification and Computation of the Tukey Depth, Verlag Dr. Kovac (Hamburg).
Mosler, K., & Mozharovskyi, P. (2014). Fast DD-classification of functional data. arXiv preprint arXiv:1403.1158.
See Also
ddalphaf.classify for classification using functional DDalpha-classifier,
compclassf.train to train the functional componentwise classifier,
dataf.* for functional data sets included in the package.
Examples
## Not run:
## load the Growth dataset
dataf = dataf.growth()
learn = c(head(dataf$dataf, 49), tail(dataf$dataf, 34))
labels = c(head(dataf$labels, 49), tail(dataf$labels, 34))
test = tail(head(dataf$dataf, 59), 10) # elements 50:59. 5 girls, 5 boys
classifier = ddalphaf.train(learn, labels, classifier.type = "ddalpha")
classified = ddalphaf.classify(test, classifier)
print(unlist(classified))
## End(Not run)
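## Not run:
## A further sketch reusing learn, labels and test from the example above:
## cross-validate the number of intervals (numFcn = numDer = -1, see the
## argument descriptions), with the search restricted by the
## Vapnik-Chervonenkis bound (cv.complete = FALSE). These settings only
## illustrate the interface; they are not asserted to be optimal for the
## Growth data.
classifier2 = ddalphaf.train(learn, labels,
                             adc.args = list(instance = "avr",
                                             numFcn = -1, numDer = -1),
                             classifier.type = "ddalpha",
                             cv.complete = FALSE)
classified2 = ddalphaf.classify(test, classifier2)
print(unlist(classified2))
## End(Not run)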