
PubBias

Package: PubBias
Type: Package
Title: Performs a simulation study to look for publication bias, using a
technique described by Ioannidis and Trikalinos (Clin Trials.
2007;4(3):245-53)
Depends: rmeta, R.utils
Version: 1.0
Date: 2013-11-18
Author: Simon Thornley
Maintainer: Simon Thornley <sithor@gmail.com>
Description: I adapted a method designed by Ioannidis and Trikalinos, which
compares the observed number of positive studies in a meta-analysis with
the expected number if the summary measure of effect, averaged over the
individual studies, were assumed true. An excess of observed over expected
positive studies is taken as evidence of publication bias. The observed
number of positive studies, at a given level of statistical significance,
is calculated by applying Fisher's exact test to the reported 2x2 table
data of each constituent study, doubling the one-sided P-value to make a
two-sided test. The corresponding expected number of positive studies is
obtained by summing the statistical powers of the individual studies. The
statistical power of each study depends on an assumed measure of effect,
here the pooled odds ratio of the meta-analysis. Each constituent study is
simulated with the given odds ratio and the same numbers of treated and
untreated participants as in the real study, and its power is estimated as
the proportion of simulated studies that are positive, again by Fisher's
exact test. The numbers of events in the treated and untreated groups are
simulated by binomial sampling: in the untreated group, the binomial
proportion is the proportion of events actually reported in the study,
and, in the treated group, it is the untreated proportion multiplied by
the risk ratio derived from the assumed common odds ratio. The level of
statistical significance used to judge a study positive may be varied;
large differences between the expected and observed numbers of positive
studies around the 0.05 level of significance constitute evidence of
publication bias. The difference between the observed and expected numbers
is tested by a chi-square test. A chi-square P-value below 0.05 is
suggestive of publication bias; however, a less stringent level of 0.1 is
often used in studies of publication bias, as the number of published
studies is usually small.
License: GPL-3
Collate: 'process.R'
Packaged: 2013-11-19 20:55:32 UTC; stho069
NeedsCompilation: no
Repository: CRAN
Date/Publication: 2013-11-21 06:48:21
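
To illustrate the "observed positives" step described above, the sketch
below applies Fisher's exact test to each study's 2x2 table and doubles the
smaller one-sided P-value to form a two-sided test. This is a minimal
sketch, not PubBias code: the helper name observed_positives and the column
names ev.trt, n.trt, ev.ctrl and n.ctrl are illustrative assumptions, not
part of the package's API.

## Minimal sketch (not PubBias code): count studies judged "positive" at
## level alpha using a doubled one-sided Fisher exact P-value.
## The data frame columns used here are hypothetical.
observed_positives <- function(d, alpha = 0.05) {
  pos <- vapply(seq_len(nrow(d)), function(i) {
    tab <- matrix(c(d$ev.trt[i],  d$n.trt[i]  - d$ev.trt[i],
                    d$ev.ctrl[i], d$n.ctrl[i] - d$ev.ctrl[i]),
                  nrow = 2, byrow = TRUE)
    p.one <- min(fisher.test(tab, alternative = "less")$p.value,
                 fisher.test(tab, alternative = "greater")$p.value)
    min(2 * p.one, 1) < alpha    # doubled one-sided P-value, capped at 1
  }, logical(1))
  sum(pos)
}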

● CRAN Task View: MetaAnalysis
1 image, 7 functions, 1 dataset
● Reverse Depends: 0
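
The expected number of positives comes from summing per-study powers
estimated by simulation. Below is a minimal sketch under stated
assumptions: sim_power is an illustrative helper, not a PubBias function,
and for brevity the simulated tables are judged with the default two-sided
fisher.test rather than the doubled one-sided test.

## Minimal sketch (not PubBias code): estimate one study's power at the
## assumed common odds ratio by binomial simulation, as described above.
sim_power <- function(ev.ctrl, n.ctrl, n.trt, or, alpha = 0.05, nsim = 1000) {
  p0 <- ev.ctrl / n.ctrl                # untreated event proportion
  p1 <- or * p0 / (1 - p0 + or * p0)    # treated proportion implied by the odds ratio
  hits <- replicate(nsim, {
    a <- rbinom(1, n.trt,  p1)          # simulated treated events
    b <- rbinom(1, n.ctrl, p0)          # simulated untreated events
    tab <- matrix(c(a, n.trt - a, b, n.ctrl - b), nrow = 2, byrow = TRUE)
    fisher.test(tab)$p.value < alpha    # simulated study is "positive"
  })
  mean(hits)                            # estimated power of this study
}

## The expected count E is the sum of the per-study powers; a chi-square
## goodness-of-fit test then compares the observed count O with E among
## k studies, e.g.
##   chisq.test(c(O, k - O), p = c(E, k - E) / k)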

Install log

* installing to library '/home/ddbj/local/lib64/R/library'
* installing *source* package 'PubBias' ...
** package 'PubBias' successfully unpacked and MD5 sums checked
** R
** data
** preparing package for lazy loading
** help
*** installing help indices
  converting help for package 'PubBias'
    finding HTML links ... done
    BMort                                   html  
    ChisqTest_expect                        html  
    expected_events                         html  
    plot_chase_observed_expected            html  
    test.n                                  html  
    test.n.treated                          html  
    test.one.treated                        html  
    test_one                                html  
** building package indices
** testing if installed package can be loaded
* DONE (PubBias)
Making 'packages.html' ... done