model.diagnostics {ModelMap}		R Documentation

Model Predictions and Diagnostics

Description

Takes a model object and makes predictions, runs model diagnostics, and creates graphs and tables of the results.

Usage

model.diagnostics(model.obj = NULL, qdata.trainfn = NULL, qdata.testfn = NULL, 
folder = NULL, MODELfn = NULL, response.name = NULL, unique.rowname = NULL,
 diagnostic.flag=NULL, seed = NULL, prediction.type=NULL, MODELpredfn = NULL, 
 na.action = NULL, v.fold = 10, device.type = NULL, DIAGNOSTICfn = NULL, 
 res=NULL, jpeg.res = 72, device.width = 7,  device.height = 7, units="in", 
 pointsize=12, cex=par()$cex, req.sens, req.spec, FPC, FNC, quantiles=NULL, 
 all=TRUE, subset = NULL, weights = NULL, mtry = NULL, controls = NULL, 
 xtrafo = NULL, ytrafo = NULL, scores = NULL, n.trees = NULL)

Arguments

model.obj

R model object. The model object to use for prediction. The model object must be of type "RF" (random forest), "QRF" (quantile random forest), "CF" (conditional forest), or "SGB" (stochastic gradient boosting).

qdata.trainfn

String. The name (full path or base name with path specified by folder) of the training data file used for building the model (file should include columns for both response and predictor variables). The file must be a comma-delimited file *.csv with column headings. qdata.trainfn can also be an R dataframe. If predictions will be made (predict = TRUE or map=TRUE) the predictor column headers must match the names of the raster layer files, or a rastLUT must be provided to match predictor columns to the appropriate raster and band. If qdata.trainfn = NULL (the default), a GUI interface prompts user to browse to the training data file.
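
For example, a minimal sketch of supplying the training data as a data frame rather than a file path (using the example data shipped with ModelMap):

# Read the training data into a data frame and pass it directly in
# place of a file name; column names must still match the predictor
# and response names used to build the model.
# qdata <- read.csv(system.file("extdata", "helpexamples", "DATATRAIN.csv",
#                               package = "ModelMap"))
# model.diagnostics(model.obj = model.obj, qdata.trainfn = qdata, ...)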

qdata.testfn

String. The name (full path or base name with path specified by folder) of the independent data set for testing (validating) the model's predictions. The file must be a comma-delimited file ".csv" with column headings, and the column headings must be the same as those in the training data file. qdata.testfn can also be an R dataframe. If qdata.testfn = NULL (the default), a GUI interface asks the user if there is a test set available, then prompts the user to browse to the test data file. If no test set is desired (for example, cross-fold validation will be performed, or, for RF models, Out-Of-Bag estimation), set qdata.testfn = FALSE. If no test set is given and qdata.testfn is not set to FALSE, the GUI interface asks if a proportion of the data should be set aside as an independent test set. If this is desired, the user will be prompted to specify the proportion to set aside as test data, and two new data files will be generated in the output folder. The new file names will be the original data file name with "_train" and "_test" appended to the end of the file names.

folder

String. The folder used for all output from predictions and/or maps. Do not add ending slash to path string. If folder = NULL (default), a GUI interface prompts user to browse to a folder. To use the working directory, specify folder = getwd().

MODELfn

String. The file name to use to save the generated model object. If MODELfn = NULL (the default), a default name is generated by pasting model.type_response.type_response.name. If the other output filenames are left unspecified, MODELfn will be used as the basic name to generate other output filenames. The filename can be the full path, or it can be the simple basename, in which case the output will be to the folder specified by folder.

response.name

String. The name of the response variable used to build the model. The response.name must be a column name from the training/test data files. If the model.obj was constructed in ModelMap with the model.build() function, then model.diagnostics() can extract the response.name from the model.obj. If the model was constructed outside of ModelMap, then you may need to specify the response.name. In particular, if an SGB model was constructed with the aid of Elith's code, it is necessary to specify the response.name argument, as all models constructed with this code are given a response name of "y.data". If the response.name argument differs from the response name in the model.obj, the specified argument is given preference, and a warning is generated.

unique.rowname

String. The name of the unique identifier used to identify each row in the training data. If unique.rowname = NULL, a GUI interface prompts user to select a variable from the list of column names from the training data file. If unique.rowname = FALSE, a variable is generated of numbers from 1 to nrow(qdata) to index each row.

diagnostic.flag

String. The name of a column used to identify a subset of rows in the training data or test data to use for model diagnostics. This column must be either a logical vector (TRUE and FALSE) or a vector of zeros and ones (where 0 = FALSE and 1 = TRUE). If this argument is used, model diagnostics that depend on predicted and observed values will be calculated from a subset of the training or test data. These include the confusion matrix and threshold criteria for binary response models and the scatterplot for continuous response models. The output file of predicted and observed values will have an additional column indicating which rows were used in the diagnostic calculations. Note that for cross validation, the entire training dataset will be used to create cross validation predictions, but only the predictions on the rows indicated by diagnostic.flag will be used for the diagnostics.
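
For example, a minimal sketch of how such a flag column might be set up before the data are passed to model.diagnostics() (the column names "DIAG" and "ELEV" are hypothetical, not part of the example data):

# Hypothetical: flag only high-elevation plots for diagnostics.
# qdata <- read.csv(qdata.trainfn)
# qdata$DIAG <- as.integer(qdata$ELEV > 2000)     # 1 = use for diagnostics
# model.diagnostics(..., qdata.trainfn = qdata, diagnostic.flag = "DIAG")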

seed

Integer. The number used to initialize randomization to build RF or SGB models. If you want to produce the same model later, use the same seed. If seed = NULL (the default), a new seed is created each run.

prediction.type

String. Prediction type. "TEST", "CV", "OOB" or "TRAIN". If prediction.type = "TEST", validation predictions will be made on the test set provided by qdata.testfn. If prediction.type = "CV", cross validation will be used on the training data provided by qdata.trainfn. If model.obj is a Random Forest model and prediction.type = "OOB", the Out-of-Bag predictions will be calculated on the training data. If model.obj is a Stochastic Gradient Boosting model and prediction.type = "TRAIN", the predictions will be calculated on the training data, but these predictions should be used with caution, as this will lead to overly optimistic estimates of model quality. A *.csv file of the unique id, observed, and predicted values is generated and placed in the specified (or default) folder.

MODELpredfn

String. Model validation. A character string used to construct the output file names for the validation diagnostics, for example the prediction *.csv file and the graphics *.jpg, *.pdf and *.ps files. The filename can be the full path, or it can be the simple basename, in which case the output will be to the folder specified by folder. If MODELpredfn = NULL (the default), a default name is created by pasting MODELfn and "_pred".

na.action

String. Model validation. Specifies the action to take if there are NA values in the predictor data or if there is a level or class of a categorical predictor variable in the validation test set but not in the training data set. By default, model.diagnostics() will use the same na.action as was given to model.build(). There are 2 options: (1) na.action = "na.omit", where any data point with NA or any new level for any of the factored predictors is removed from the data; (2) na.action = "na.roughfix", where a missing categorical predictor is replaced with the most common category, and a missing continuous predictor is replaced with the median. Note: data points with missing response values will always be omitted.

v.fold

Integer (or logical FALSE). Model validation. The number of cross validation folds to use when making validation predictions on the training data. Only used if prediction.type = "CV".

device.type

String or vector of strings. Model validation. One or more device types for graphical output from model validation diagnostics.

Current choices:

"default" default graphics device
"jpeg" *.jpg files
"none" no graphics device generated
"pdf" *.pdf files
"png" *.png files
"postscript" *.ps files
"tiff" *.tif files

DIAGNOSTICfn

String. Model validation. Name used as base to create names for output files from model validation diagnostics. The filename can be the full path, or it can be the simple basename, in which case the output will be to the folder specified by folder. Defaults to DIAGNOSTICfn = MODELfn followed by the appropriate suffixes (i.e. ".csv", ".jpg", etc...).

res

Integer. Model validation. Pixels per inch for jpeg, png, and tiff plots. The default is 72 dpi, which is suitable for on-screen viewing. For printing, a setting of 300 dpi is suggested.
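
For example, a sketch of requesting print-quality graphics files (values are illustrative only):

# Write 300 dpi *.png and *.pdf diagnostic plots instead of (or in
# addition to) the default on-screen device.
# model.diagnostics(...,
#                   device.type = c("png", "pdf"),
#                   res = 300,
#                   device.width = 7, device.height = 7, units = "in")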

jpeg.res

Integer. Model validation. Deprecated. Ignored unless res is not provided.

device.width

Integer. Model validation. The device width for diagnostic plots in inches.

device.height

Integer. Model validation. The device height for diagnostic plots in inches.

units

Model validation. The units in which device.height and device.width are given. Can be "px" (pixels), "in" (inches, the default), "cm" or "mm".

pointsize

Integer. Model validation. The default pointsize of plotted text, interpreted as big points (1/72 inch) at res ppi.

cex

Integer. Model validation. The cex for diagnostic plots.

req.sens

Numeric. Model validation. The required sensitivity for threshold optimization for binary response model evaluation.

req.spec

Numeric. Model validation. The required specificity for threshold optimization for binary response model evaluation.

FPC

Numeric. Model validation. The False Positive Cost for threshold optimization for binary response model evaluation.

FNC

Numeric. Model validation. The False Negative Cost for threshold optimization for binary response model evaluation.
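
A hedged sketch of supplying these threshold criteria for a binary response model (the model object and all values are illustrative only):

# Assumes model.obj.binary is an RF model built with response.type = "binary".
# Require at least 0.85 sensitivity and 0.85 specificity, and weight
# false negatives twice as heavily as false positives.
# PRED.BIN <- model.diagnostics(model.obj = model.obj.binary,
#                               qdata.trainfn = qdata.trainfn,
#                               folder = folder,
#                               unique.rowname = unique.rowname,
#                               prediction.type = "OOB",
#                               req.sens = 0.85, req.spec = 0.85,
#                               FPC = 1, FNC = 2)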

quantiles

Numeric Vector. QRF models. The quantiles to predict. A numeric vector with values between zero and one. If the model was built without specifying quantiles, quantile importance cannot be calculated, but quantiles can still be used to specify prediction quantiles. If the model was built with quantiles specified, then the model quantiles will be used for the importance graph. If quantiles are not specified for either model building or diagnostics, prediction quantiles default to quantiles=c(0.1,0.5,0.9).
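
For example, a sketch of requesting specific prediction quantiles from a QRF model (the model object is hypothetical):

# Assumes model.obj.qrf was built with model.build(model.type = "QRF", ...).
# PRED.QRF <- model.diagnostics(model.obj = model.obj.qrf,
#                               qdata.testfn = qdata.testfn,
#                               folder = folder,
#                               unique.rowname = unique.rowname,
#                               prediction.type = "TEST",
#                               quantiles = c(0.1, 0.5, 0.9))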

all

Logical. QRF models. all=TRUE uses all observations for prediction. all=FALSE uses only a certain number of observations per node for prediction (set with argument obs). Unlike in the quantregForest package itself, the default in ModelMap is all=TRUE, to more closely parallel ordinary random forest models.

subset

CF models. NOT SUPPORTED. Only needed for prediction.type="CV" for CF models. An optional vector specifying a subset of observations to be used in the fitting process. Note: subset is not yet supported for cross validation diagnostics.

weights

CF models. NOT SUPPORTED. Only needed for prediction.type="CV" for CF models. An optional vector of weights to be used in the fitting process. Non-negative integer valued weights are allowed as well as non-negative real weights. Observations are sampled (with or without replacement) according to probabilities weights/sum(weights). The fraction of observations to be sampled (without replacement) is computed based on the sum of the weights if all weights are integer-valued and based on the number of weights greater zero else. Alternatively, weights can be a double matrix defining case weights for all ncol(weights) trees in the forest directly. This requires more storage but gives the user more control. Note: weights is not yet supported for cross validation diagnostics.

mtry

Integer. Only needed for prediction.type="CV" for CF models (for RF and QRF models mtry will be determined from the model object). Number of variables to try at each node of Random Forest trees.

controls

CF models. Only needed for prediction.type="CV" for CF models. An object of class ForestControl-class, which can be obtained using cforest_control (and its convenience interfaces cforest_unbiased and cforest_classical). If controls is specified, then stand alone arguments mtry and ntree ignored and these parameters must be specified as part of the controls argument. If controls not specified, model.build defaults to cforest_unbiased(mtry=mtry, ntree=ntree) with the values of mtry and ntree specified by the stand alone arguments.

xtrafo

CF models. Only needed for prediction.type="CV" for CF models. A function to be applied to all input variables. By default, the ptrafo function is applied.

ytrafo

CF models. Only needed for prediction.type="CV" for CF models. A function to be applied to all response variables. By default, the ptrafo function is applied.

scores

CF models. NOT SUPPORTED. Only needed for prediction.type="CV" for CF models. An optional named list of scores to be attached to ordered factors. Note: scores is not yet supported for cross validation diagnostics.

n.trees

Integer. SGB models. The number of stochastic gradient boosting trees for an SGB model. If n.trees=NULL (the default), the model creation code will increase the number of trees 100 at a time until the OOB error rate stops improving. The gbm function gbm.perf() will then be used, with argument method="OOB", to select the best number of trees for model predictions from the total calculated trees. The gbm package warns that OOB generally underestimates the optimal number of iterations, although predictive performance is reasonably competitive.
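
For example, a sketch of fixing the number of trees for cross-validation diagnostics of an SGB model (the model object and the value 500 are illustrative only):

# Assumes model.obj.sgb was built by model.build(model.type = "SGB", ...).
# PRED.SGB <- model.diagnostics(model.obj = model.obj.sgb,
#                               qdata.trainfn = qdata.trainfn,
#                               folder = folder,
#                               unique.rowname = unique.rowname,
#                               prediction.type = "CV",
#                               n.trees = 500)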

Details

model.diagnostics() takes a model object and makes predictions, runs model diagnostics, and creates graphs and tables of the results.

model.diagnostics() can be run in a traditional R command mode, where all arguments are specified in the function call. However, it can also be used in a full push-button mode, where you type the simple command model.diagnostics(), and GUI pop-up windows will ask questions about the type of model, the file locations of the data, etc.

When running model.diagnostics() on non-Windows platforms, file names and folders need to be specified in the argument list, but other push-button selections are handled by the select.list() function, which is platform independent.

Diagnostic predictions are made by one of four methods, and a text file is generated consisting of three columns: observation ID, observed values, and predicted values. If prediction.type = "CV", an additional column indicates which cross-validation fold each observation fell into. If the model's response type is categorical, then in addition to a column giving the category predicted by majority vote, there are also columns for each possible response category giving the proportion of trees that predicted that category.
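
For instance, assuming the prediction file is written as the MODELpredfn base name with a ".csv" extension in the output folder, it could be inspected with:

# Assumption: the prediction file is <MODELpredfn>.csv in 'folder'.
# pred <- read.csv(file.path(folder, paste0(MODELpredfn, ".csv")))
# head(pred)   # unique id, observed value, predicted value (plus fold for "CV")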

A variable importance graph is made. If response.type = "categorical", category-specific graphs are generated for variable importance. These show how much the model accuracy for each category is affected when the values of each predictor variable are randomly permuted.

The package corrplot is used to generate a plot of correlation between predictor variables. If there are highly correlated predictor variables, then the variable importances of "RF", "QRF", "SGB" and "QSGB" models need to be interpreted with care, and users may want to consider looking at the conditional variable importances available for "CF" models produced by the party package.

If model.type = "RF", the OOB error is plotted as a function of number of trees in the model. If response.type = "binary" or If response.type = "categorical" category specific graphs are generated for OOB error as a function of number of trees.

If response.type = "binary", a summary graph is made using the PresenceAbsence package and a *.csv spreadsheets are created of optimized thresholds by several methods with their associated error statistics, and predicted prevalence.

If response.type = "continuous" a scatterplot of observed vs. predicted is created with a simple linear regression line. The graph is labeled with slope and intercept of this line as well as Pearson's and Spearman's correlation coefficients.

If response.type = "categorical", a confusion matrix is generated, that includes erros of ommission and comission, as well as Kappa, Percent Correctly Classified (PCC) and the Multicategorical Area Under the Curve (MAUC) as defined by Hand and Till (2001) and calculated by the package HandTill2001.

Value

The function returns a dataframe of the row ID and the observed and predicted values.

For Binary response models the predicted probability of presence is returned.

For Categorical Response models the predicted category (by majority vote) is returned as well as a column for each category giving the probability of that category. If necessary, make.names is applied to the categories to create valid column names.

For Continuous response models the predicted value is returned.

If prediction.type = "CV" the dataframe also includes a column indicating which cross-validation fold each datapoint was in.

Note

If you are running cross validation diagnostics on a CF model, the model parameters will NOT automatically be passed to model.diagnostics(). For cross validation, it is the user's responsibility to be certain that the CF arguments are the same in model.build() and model.diagnostics().

Also, for some CF model parameters (subset, weights, and scores) ModelMap only provides OOB and independent test set diagnostics, and does not support cross validation diagnostics.

Author(s)

Elizabeth Freeman and Tracey Frescino

References

Breiman, L. (2001) Random Forests. Machine Learning, 45:5-32.

Elith, J., Leathwick, J. R. and Hastie, T. (2008). A working guide to boosted regression trees. Journal of Animal Ecology. 77:802-813.

Friedman, J.H. (2001). Greedy function approximation: a gradient boosting machine. Ann. Stat., 29(5):1189-1232.

Friedman, J.H. (2002). Stochastic gradient boosting. Comput. Stat. Data An., 38(4):367-378.

Hand, D. J., & Till, R. J. (2001). A simple generalisation of the area under the ROC curve for multiple class classification problems. Machine Learning, 45(2), 171-186.

Liaw, A. and Wiener, M. (2002). Classification and Regression by randomForest. R News 2(3), 18–22.

Ridgeway, G., (1999). The state of boosting. Comp. Sci. Stat. 31:172-181

See Also

get.test, model.build, model.mapmake

Examples

###########################################################################
############################# Run this set up code: #######################
###########################################################################

# set seed:
seed=38

# Define training and test files:

qdata.trainfn = system.file("extdata", "helpexamples","DATATRAIN.csv", package = "ModelMap")
qdata.testfn = system.file("extdata", "helpexamples","DATATEST.csv", package = "ModelMap")

# Define folder for all output:
folder=getwd()	

#identifier for individual training and test data points

unique.rowname="ID"


###########################################################################
############## Pick one of the following sets of definitions: #############
###########################################################################


########## Continuous Response, Continuous Predictors ############

#file name to store model:
MODELfn="RF_Bio_TC"				

#predictors:
predList=c("TCB","TCG","TCW")	

#define which predictors are categorical:
predFactor=FALSE	

# Response name and type:
response.name="BIO"
response.type="continuous"


########## binary Response, Continuous Predictors ############

#file name to store model:
MODELfn="RF_CONIFTYP_TC"				

#predictors:
predList=c("TCB","TCG","TCW")		

#define which predictors are categorical:
predFactor=FALSE

# Response name and type:
response.name="CONIFTYP"

# This variable is 1 if a conifer or mixed conifer type is present, 
# otherwise 0.

response.type="binary"


########## Continuous Response, Categorical Predictors ############

# In this example, NLCD is a categorical predictor.
#
# You must decide what you want to happen if there are categories
# present in the data to be predicted (either the validation/test set
# or in the image file) that were not present in the original training data.
# Choices:
#       na.action =  "na.omit"
#                    Any validation datapoint or image pixel with a value for any
#                    categorical predictor not found in the training data will be
#                    returned as NA.
#       na.action =  "na.roughfix"
#                    Any validation datapoint or image pixel with a value for any
#                    categorical predictor not found in the training data will have
#                    the most common category for that predictor substituted,
#                    and a prediction will be made.

# You must also let R know which of the predictors are categorical, in other
# words, which ones R needs to treat as factors.
# This vector must be a subset of the predictors given in predList

#file name to store model:
MODELfn="RF_BIO_TCandNLCD"			

#predictors:
predList=c("TCB","TCG","TCW","NLCD")

#define which predictors are categorical:
predFactor=c("NLCD")

# Response name and type:
response.name="BIO"
response.type="continuous"



###########################################################################
########################### build model: ##################################
###########################################################################


### create model ###

model.obj = model.build( model.type="RF",
                       qdata.trainfn=qdata.trainfn,
                       folder=folder,		
                       unique.rowname=unique.rowname,	
                       MODELfn=MODELfn,
                       predList=predList,
                       predFactor=predFactor,
                       response.name=response.name,
                       response.type=response.type,
                       seed=seed,
                       na.action="na.roughfix"
)

###########################################################################
#### Then Run this code make validation predictions and diagnostics: ######
###########################################################################


### for Out-of-Bag predictions ###

MODELpredfn<-paste(MODELfn,"_OOB",sep="")
PRED.OOB<-model.diagnostics( 	model.obj=model.obj,
				qdata.trainfn=qdata.trainfn,
                   		folder=folder,		
                  	 	unique.rowname=unique.rowname,
                	# Model Validation Arguments
                   		prediction.type="OOB",
                   		MODELpredfn=MODELpredfn,
                   		device.type=c("default","jpeg","pdf"),	
                   		na.action="na.roughfix"
)
PRED.OOB

### for Cross-Validation predictions ###

#MODELpredfn<-paste(MODELfn,"_CV",sep="")
#PRED.CV<-model.diagnostics( 	model.obj=model.obj,
#                   		qdata.trainfn=qdata.trainfn,
#                   		folder=folder,		
#                   		unique.rowname=unique.rowname,
#                   		seed=seed,
#                	# Model Validation Arguments
#                   		prediction.type="CV",
#                   		MODELpredfn=MODELpredfn,
#                   		device.type=c("default","jpeg","pdf"),	
#                   		v.fold=10,
#                   		na.action="na.roughfix"
#)
#PRED.CV

### for Independent Test Set predictions ###

#MODELpredfn<-paste(MODELfn,"_TEST",sep="")
#PRED.TEST<-model.diagnostics( 	model.obj=model.obj,
#                   		qdata.testfn=qdata.testfn,
#                   		folder=folder,		
#                   		unique.rowname=unique.rowname,
#                	# Model Validation Arguments
#                   		prediction.type="TEST",
#                   		MODELpredfn=MODELpredfn,
#                   		device.type=c("default","jpeg","pdf"),	
#                   		na.action="na.roughfix"
#)
#PRED.TEST
