
KullbackLeiblerSelection              R Documentation

Selection of Differential Distributions with Kullback-Leibler Distance

Description

Ranks features by Kullback-Leibler distance, from largest to smallest, and chooses the subset of top-ranked features with the best resubstitution performance.
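
As an illustration of this two-stage strategy, a minimal sketch follows (this is not the package's implementation; the helper fitAndResubstitute and all other names are hypothetical). Features are ranked by decreasing distance, each candidate size is evaluated by resubstitution, and the best-performing top set is kept.

  # Hypothetical sketch of the rank-then-resubstitute strategy.
  # fitAndResubstitute(featureIndices) is assumed to train on all samples and
  # return the resubstitution error rate for that feature subset.
  selectByResubstitution <- function(distances, nFeaturesToTry, fitAndResubstitute)
  {
    ranked <- order(distances, decreasing = TRUE)  # largest distance first
    errors <- sapply(nFeaturesToTry, function(n) fitAndResubstitute(ranked[1:n]))
    ranked[1:nFeaturesToTry[which.min(errors)]]    # keep the best-performing set
  }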

Usage

## S4 method for signature 'matrix'
KullbackLeiblerSelection(expression, classes, ...)

## S4 method for signature 'ExpressionSet'
KullbackLeiblerSelection(expression, datasetName,
                         trainParams, predictParams, resubstituteParams, ...,
                         selectionName, verbose = 3)

Arguments

expression

Either a matrix or an ExpressionSet containing the training data. For a matrix, the rows are features and the columns are samples.

classes

A vector of class labels.

datasetName

A name for the dataset used. Stored in the result.

trainParams

A container of class TrainParams describing the classifier to use for training.

predictParams

A container of class PredictParams describing how prediction is to be done.

resubstituteParams

An object of class ResubstituteParams describing the performance measure to consider and the numbers of top features to try for resubstitution classification.

...

Further arguments passed to getLocationsAndScales.

selectionName

A name to identify this selection method by. Stored in the result.

verbose

A number between 0 and 3 indicating the amount of progress messages to print. This function only prints progress messages if the value is 3.

Details

The distance is defined as 0.5 * ((location1 - location2)^2 / scale1^2 + (location1 - location2)^2 / scale2^2 + scale1^2 / scale2^2 + scale2^2 / scale1^2).

The subscripts denote the group for which the parameter is calculated.
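
For a single feature, the distance can be computed directly. Below is a minimal sketch assuming the location is estimated by the mean and the scale by the standard deviation; getLocationsAndScales allows other estimators, and the function and variable names here are hypothetical.

  # Hypothetical illustration for one feature and two classes.
  klDistance <- function(values, classes)
  {
    group1 <- values[classes == levels(classes)[1]]
    group2 <- values[classes == levels(classes)[2]]
    location1 <- mean(group1); scale1 <- sd(group1)
    location2 <- mean(group2); scale2 <- sd(group2)
    0.5 * ((location1 - location2)^2 / scale1^2 +
           (location1 - location2)^2 / scale2^2 +
           scale1^2 / scale2^2 + scale2^2 / scale1^2)
  }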

Value

An object of class SelectResult, or a list of such objects if the classifier used to determine the resubstitution error rate produced multiple prediction varieties.
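
In the example run shown below, four prediction varieties are produced, so a named list is returned. An element can be extracted with ordinary list indexing; the variable name selections here is hypothetical.

  # Assuming 'selections' holds the list returned in the example below.
  unweightedResult <- selections[["weighted=unweighted"]]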

Author(s)

Dario Strbenac

Examples

  if(require(sparsediscrim))
  {
    # The first 20 features have a bimodal distribution for the Poor class.
    # The other 80 features have a normal distribution for both classes.
    genesMatrix <- sapply(1:25, function(sample) c(rnorm(20, sample(c(8, 12), 20, replace = TRUE), 1), rnorm(80, 10, 1)))
    genesMatrix <- cbind(genesMatrix, sapply(1:25, function(sample) rnorm(100, 10, 1)))
    classes <- factor(rep(c("Poor", "Good"), each = 25))
    KullbackLeiblerSelection(genesMatrix, classes, "Example",
                             trainParams = TrainParams(naiveBayesKernel, FALSE, doesTests = TRUE),
                             predictParams = PredictParams(function(){}, FALSE, getClasses = function(result) result),
                             resubstituteParams = ResubstituteParams(nFeatures = seq(10, 100, 10), performanceType = "balanced", better = "lower")
                             )
  }

Results


> library(ClassifyR)
Loading required package: Biobase
Loading required package: BiocGenerics
Loading required package: parallel

Loading required package: BiocParallel
> ### Name: KullbackLeiblerSelection
> ### Title: Selection of Differential Distributions with Kullback Leibler
> ###   Distance
> ### Aliases: KullbackLeiblerSelection
> ###   KullbackLeiblerSelection,matrix-method
> ###   KullbackLeiblerSelection,ExpressionSet-method
> 
> ### ** Examples
> 
>   if(require(sparsediscrim))
+   {
+     # First 20 features have bimodal distribution for Poor class. Other 80 features have normal distribution for
+     # both classes.
+     genesMatrix <- sapply(1:25, function(sample) c(rnorm(20, sample(c(8, 12), 20, replace = TRUE), 1), rnorm(80, 10, 1)))
+     genesMatrix <- cbind(genesMatrix, sapply(1:25, function(sample) rnorm(100, 10, 1)))
+     classes <- factor(rep(c("Poor", "Good"), each = 25))
+     KullbackLeiblerSelection(genesMatrix, classes, "Example",
+                              trainParams = TrainParams(naiveBayesKernel, FALSE, doesTests = TRUE),
+                              predictParams = PredictParams(function(){}, FALSE, getClasses = function(result) result),
+                              resubstituteParams = ResubstituteParams(nFeatures = seq(10, 100, 10), performanceType = "balanced", better = "lower")
+                              )
+   }
Loading required package: sparsediscrim
Selecting features by Kullback-Leibler divergence
Fitting densities.
Calculating crossover points of class densities.
Calculating vertical differences between densities.
Calculating class scores and determining class labels.
Training and classification completed.
Prediction completed.
Prediction completed.
Prediction completed.
Prediction completed.
[... the same progress messages repeat for each of the remaining nine feature counts in seq(10, 100, 10); repeated output omitted ...]
Features selected.
$`weighted=unweighted`
An object of class 'SelectResult'.
Dataset Name: Example.
Feature Selection Name: Kullback-Leibler Divergence.
Features Considered: 100.
Selections: List of length 1.
Selection Size : 40 features.

$`weighted=weighted,weight=crossover distance`
An object of class 'SelectResult'.
Dataset Name: Example.
Feature Selection Name: Kullback-Leibler Divergence.
Features Considered: 100.
Selections: List of length 1.
Selection Size : 40 features.

$`weighted=weighted,weight=height difference`
An object of class 'SelectResult'.
Dataset Name: Example.
Feature Selection Name: Kullback-Leibler Divergence.
Features Considered: 100.
Selections: List of length 1.
Selection Size : 60 features.

$`weighted=weighted,weight=sum differences`
An object of class 'SelectResult'.
Dataset Name: Example.
Feature Selection Name: Kullback-Leibler Divergence.
Features Considered: 100.
Selections: List of length 1.
Selection Size : 40 features.
