A Robust Bayesian Copas Selection Model for Quantifying and Correcting Publication Bias (2005.02930v3)

Published 6 May 2020 in stat.ME

Abstract: The validity of conclusions from meta-analysis is potentially threatened by publication bias. Most existing procedures for correcting publication bias assume normality of the study-specific effects that account for between-study heterogeneity. However, this assumption may not be valid, and the performance of these bias correction procedures can be highly sensitive to departures from normality. Further, there exist few measures to quantify the magnitude of publication bias based on selection models. In this paper, we address both of these issues. First, we explore the use of heavy-tailed distributions for the study-specific effects within a Bayesian hierarchical framework. The deviance information criterion (DIC) is used to determine the appropriate distribution to use for conducting the final analysis. Second, we develop a new measure to quantify the magnitude of publication bias based on Hellinger distance. Our measure is easy to interpret and takes advantage of the estimation uncertainty afforded naturally by the posterior distribution. We illustrate our proposed approach through simulation studies and meta-analyses on lung cancer and antidepressants. To assess the prevalence of publication bias, we apply our method to 1500 meta-analyses of dichotomous outcomes in the Cochrane Database of Systematic Reviews. Our methods are implemented in the publicly available R package RobustBayesianCopas.
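
For context, here is a minimal sketch of the structure the abstract alludes to, with assumed notation: a Copas-type selection model pairs an outcome equation with a latent selection (propensity) equation, and the paper replaces the usual normality assumption on the standardized study-specific effects with heavy-tailed alternatives (a Student-t is used below purely as an example). The Hellinger-distance measure D is sketched as a comparison between the bias-corrected and uncorrected posteriors of the mean effect; the exact definitions and priors should be taken from the paper itself.

```latex
% Sketch of a Copas-type selection model with heavy-tailed study-specific
% effects (notation is assumed, not taken verbatim from the paper).

% Outcome equation for study i with reported standard error s_i; the usual
% assumption u_i ~ N(0,1) is relaxed to a heavy-tailed law, e.g. Student-t:
\[
  y_i = \theta + \tau u_i + s_i \varepsilon_i, \qquad
  u_i \sim t_{\nu}(0, 1), \qquad
  \varepsilon_i \sim \mathcal{N}(0, 1).
\]

% Selection equation: study i is published only if z_i > 0, with smaller
% studies (larger s_i) less likely to be selected; correlation rho between
% the two error terms is what induces publication bias:
\[
  z_i = \gamma_0 + \frac{\gamma_1}{s_i} + \delta_i, \qquad
  \delta_i \sim \mathcal{N}(0, 1), \qquad
  \operatorname{corr}(\varepsilon_i, \delta_i) = \rho .
\]

% A Hellinger-distance bias measure: compare the posterior of theta under
% the selection model with the posterior when selection is ignored:
\[
  D = H\!\bigl(\pi(\theta \mid \text{selection model}),\,
               \pi(\theta \mid \text{no selection})\bigr), \qquad
  H^{2}(p, q) = \frac{1}{2} \int \bigl(\sqrt{p(x)} - \sqrt{q(x)}\bigr)^{2} dx,
\]
% so D lies in [0, 1]: values near 0 suggest negligible publication bias,
% values near 1 suggest severe bias.
```

Because D compares full posterior distributions rather than point estimates, it reflects the estimation uncertainty that the abstract says the measure takes advantage of.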
