DTComPair-package    R Documentation

Comparison of Binary Diagnostic Tests in a Paired Study Design

Description

This package contains functions to compare the accuracy of two binary diagnostic tests in a “paired” study design, i.e. when each test is applied to each subject in the study.

Details

Package: DTComPair
Type: Package
Version: 1.0.3
Date: 2014-02-15
License: GPL (>=2)

The accuracy measures that can be compared in the present version are sensitivity, specificity, positive and negative predictive values, and positive and negative diagnostic likelihood ratios.

Results from a binary gold-standard test must also be available for each subject.
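For a single test compared against the gold standard, all of these measures reduce to simple ratios of cell counts in the 2x2 table of test result versus true disease status. A minimal base-R sketch, using hypothetical counts rather than the package's example data:

```r
# Hypothetical 2x2 table of one test vs. the gold standard:
# tp = test positive & diseased, fn = test negative & diseased,
# fp = test positive & non-diseased, tn = test negative & non-diseased
tp <- 80; fn <- 20; fp <- 30; tn <- 70

sens <- tp / (tp + fn)        # sensitivity: P(T+ | D+)
spec <- tn / (tn + fp)        # specificity: P(T- | D-)
ppv  <- tp / (tp + fp)        # positive predictive value: P(D+ | T+)
npv  <- tn / (tn + fn)        # negative predictive value: P(D- | T-)
pdlr <- sens / (1 - spec)     # positive diagnostic likelihood ratio
ndlr <- (1 - sens) / spec     # negative diagnostic likelihood ratio

round(c(sens = sens, spec = spec, ppv = ppv, npv = npv,
        pdlr = pdlr, ndlr = ndlr), 3)
```

The package functions acc.1test and acc.paired additionally return standard errors and confidence intervals for these quantities.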

Methods for the comparison of sensitivity and specificity: McNemar's test (McNemar, 1947) and an exact binomial test. In addition, several methods for computing confidence intervals for differences in sensitivity and specificity are implemented.
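The McNemar statistic for comparing, say, sensitivities uses only the diseased subjects on which the two tests disagree. A sketch with hypothetical discordant counts, in the continuity-uncorrected form (sesp.mcnemar and sesp.exactbinom implement the package's actual computations):

```r
# Among diseased subjects (hypothetical counts):
# b = test 1 positive & test 2 negative, c = test 1 negative & test 2 positive
b <- 65; c <- 9

stat <- (b - c)^2 / (b + c)          # McNemar chi-squared statistic, 1 df
p    <- 1 - pchisq(stat, df = 1)

# Exact binomial alternative: under H0 both kinds of discordance are
# equally likely, so b ~ Binomial(b + c, 0.5)
p.exact <- binom.test(b, b + c, p = 0.5)$p.value
```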

Methods for the comparison of positive and negative predictive values: the generalized score statistic (Leisenring et al., 2000), the weighted generalized score statistic (Kosinski, 2013), and a comparison of relative predictive values (Moskowitz and Pepe, 2006).
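On the relative scale, the predictive-value comparison tests whether the ratio of the two PPVs (or NPVs) equals 1, with inference on the log scale. A sketch assuming the ratio estimate and its log-scale standard error are already given (both values here are hypothetical; pv.rpv estimates them from the data):

```r
rppv   <- 0.954   # hypothetical relative PPV, test 1 / test 2
se.log <- 0.020   # hypothetical standard error of log(rppv)

z.stat <- log(rppv) / se.log     # Wald test of H0: relative PPV = 1
p      <- 2 * pnorm(-abs(z.stat))
# 95% confidence interval, back-transformed to the ratio scale
ci     <- exp(log(rppv) + c(-1, 1) * qnorm(0.975) * se.log)
```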

Methods for comparison of positive and negative diagnostic likelihood ratios: a regression model approach (Gu and Pepe, 2009).
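The regression approach reports the ratio of the two tests' DLRs together with a log-scale standard error from the fitted model; a Wald test on the log scale then compares the two tests. A sketch with hypothetical inputs (the model fitting itself, done by dlr.regtest, is not shown):

```r
pdlr1 <- 2.74; pdlr2 <- 3.72   # hypothetical positive DLRs of the two tests
se.log.ratio <- 0.133          # hypothetical SE of the log ratio, from the model

ratio  <- pdlr1 / pdlr2
z.stat <- log(ratio) / se.log.ratio         # Wald test of H0: ratio = 1
p      <- 2 * pnorm(-abs(z.stat))
ci     <- exp(log(ratio) + c(-1, 1) * qnorm(0.975) * se.log.ratio)
```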

For a general introduction to the evaluation of diagnostic tests, see, e.g., Pepe (2003) or Zhou et al. (2011).

Author(s)

Christian Stock, Thomas Hielscher

Maintainer: Christian Stock <stock@imbi.uni-heidelberg.de>

References

Gu, W. and Pepe, M. S. (2009). Estimating the capacity for improvement in risk prediction with a marker. Biostatistics, 10(1):172-86.

Kosinski, A.S. (2013). A weighted generalized score statistic for comparison of predictive values of diagnostic tests. Stat Med, 32(6):964-77.

Leisenring, W., Alonzo, T., and Pepe, M.S. (2000). Comparisons of predictive values of binary medical diagnostic tests for paired designs. Biometrics, 56(2):345-51.

McNemar, Q. (1947). Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153-7.

Moskowitz, C.S., and Pepe, M.S. (2006). Comparing the predictive values of diagnostic tests: sample size and analysis for paired study designs. Clin Trials, 3(3):272-9.

Pepe, M. (2003). The statistical evaluation of medical tests for classification and prediction. Oxford Statistical Science Series. Oxford University Press, 1st edition.

Zhou, X., Obuchowski, N., and McClish, D. (2011). Statistical Methods in Diagnostic Medicine. Wiley Series in Probability and Statistics. John Wiley & Sons, Hoboken, New Jersey, 2nd edition.

See Also

Data management functions: tab.1test, tab.paired, read.tab.paired, generate.paired and represent.long.

Computation of standard accuracy measures for a single test: acc.1test and acc.paired.

Comparison of sensitivity and specificity: sesp.mcnemar, sesp.exactbinom and sesp.diff.ci.

Comparison of positive and negative predictive values: pv.gs, pv.wgs and pv.rpv.

Comparison of positive and negative diagnostic likelihood ratios: dlr.regtest and DLR.

Examples

data(Paired1)  # hypothetical study data shipped with the package
hsd <- tab.paired(d=d, y1=y1, y2=y2, data=Paired1)  # build paired 2x2 tables
acc.paired(hsd)    # accuracy measures for each test
sesp.mcnemar(hsd)  # compare sensitivity and specificity (McNemar's test)
pv.rpv(hsd)        # compare predictive values (relative predictive values)
dlr.regtest(hsd)   # compare diagnostic likelihood ratios (regression approach)

Results


R version 3.3.1 (2016-06-21) -- "Bug in Your Hair"
Copyright (C) 2016 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)


> library(DTComPair)
Loading required package: gee
Loading required package: PropCIs
> ### ** Examples
> 
> data(Paired1) # Hypothetical study data 
> hsd <- tab.paired(d=d, y1=y1, y2=y2, data=Paired1)
> acc.paired(hsd)
Diagnostic accuracy of test 'y1'

(Estimates, standard errors and 95%-confidence intervals)

                 Est.         SE  Lower CL  Upper CL
Sensitivity 0.8802661 0.01528718 0.8503038 0.9102284
Specificity 0.6781609 0.02891782 0.6214830 0.7348388
PPV         0.8253638 0.01731081 0.7914353 0.8592924
NPV         0.7662338 0.02784617 0.7116563 0.8208113

           Est.  SE (log)  Lower CL  Upper CL
PDLR  2.7351124 0.0915147 2.2860079 3.2724472
NDLR  0.1765568 0.1346088 0.1356142 0.2298601

----------------------------------------------------------
Diagnostic accuracy of test 'y2'

(Estimates, standard errors and 95%-confidence intervals)

                 Est.         SE  Lower CL  Upper CL
Sensitivity 0.7560976 0.02022128 0.7164646 0.7957305
Specificity 0.7969349 0.02490054 0.7481307 0.8457390
PPV         0.8654822 0.01718980 0.8317908 0.8991736
NPV         0.6540881 0.02667395 0.6018081 0.7063680

           Est.  SE (log)  Lower CL  Upper CL
PDLR  3.7234238 0.1255060 2.9114648 4.7618247
NDLR  0.3060507 0.0885996 0.2572629 0.3640906
> sesp.mcnemar(hsd)
$sensitivity
$sensitivity$test1
[1] 0.8802661

$sensitivity$test2
[1] 0.7560976

$sensitivity$diff
[1] -0.1241685

$sensitivity$test.statistic
[1] 31.36

$sensitivity$p.value
[1] 2.143518e-08


$specificity
$specificity$test1
[1] 0.6781609

$specificity$test2
[1] 0.7969349

$specificity$diff
[1] 0.1187739

$specificity$test.statistic
[1] 12.81333

$specificity$p.value
[1] 0.0003441579


$method
[1] "mcnemar"

> pv.rpv(hsd)
$ppv
$ppv$test1
[1] 0.8253638

$ppv$test2
[1] 0.8654822

$ppv$rppv
[1] 0.9536462

$ppv$se.log.rppv
[1] 0.01991247

$ppv$lcl.rppv
[1] 0.9171445

$ppv$ucl.rppv
[1] 0.9916006

$ppv$test.statistic
[1] -2.383559

$ppv$p.value
[1] 0.01714612


$npv
$npv$test1
[1] 0.7662338

$npv$test2
[1] 0.6540881

$npv$rnpv
[1] 1.171454

$npv$se.log.rnpv
[1] 0.03783679

$npv$lcl.rnpv
[1] 1.087723

$npv$ucl.rnpv
[1] 1.261629

$npv$test.statistic
[1] 4.182314

$npv$p.value
[1] 2.885568e-05


$method
[1] "relative predictive values (rpv)"

$alpha
[1] 0.05

> dlr.regtest(hsd)
$pdlr
$pdlr$test1
[1] 2.735112

$pdlr$test2
[1] 3.723424

$pdlr$ratio
[1] 0.7345692

$pdlr$se.log
[1] 0.1326086

$pdlr$test.statistic
[1] -2.326177

$pdlr$p.value
[1] 0.0200091

$pdlr$lcl
[1] 0.5664428

$pdlr$ucl
[1] 0.9525973


$ndlr
$ndlr$test1
[1] 0.1765568

$ndlr$test2
[1] 0.3060507

$ndlr$ratio
[1] 0.5768875

$ndlr$se.log
[1] 0.137376

$ndlr$test.statistic
[1] -4.004396

$ndlr$p.value
[1] 6.217627e-05

$ndlr$lcl
[1] 0.4407136

$ndlr$ucl
[1] 0.7551371


$alpha
[1] 0.05

$method
[1] "DLR regression model (regtest)"
