Empirical Bayes factors for common hypothesis tests (2301.11057v4)

Published 26 Jan 2023 in stat.ME, math.ST, and stat.TH

Abstract: Bayes factors for composite hypotheses have difficulty in encoding vague prior knowledge, as improper priors cannot be used and objective priors may be subjectively unreasonable. To address these issues I revisit the posterior Bayes factor, in which the posterior distribution from the data at hand is re-used in the Bayes factor for the same data. I argue that this is biased when calibrated against proper Bayes factors, but propose adjustments to allow interpretation on the same scale. In the important case of a regular normal model, the bias in log scale is half the number of parameters. The resulting empirical Bayes factor is closely related to the widely applicable information criterion. I develop test-based empirical Bayes factors for several standard tests and propose an extension to multiple testing closely related to the optimal discovery procedure. When only a P-value is available, an approximate empirical Bayes factor is 10p. I propose interpreting the strength of Bayes factors on a logarithmic scale with base 3.73, reflecting the sharpest distinction between weaker and stronger belief. This provides an objective framework for interpreting statistical evidence, and realises a Bayesian/frequentist compromise.

Summary

  • The paper introduces empirical Bayes factors (EBFs) to correct bias in posterior Bayes factors, enhancing objectivity in hypothesis testing.
  • It develops test-based EBFs that allow computation alongside P-values, facilitating practical use even with limited summary statistics.
  • A logarithmic scale with base 3.73 is proposed for interpreting evidence strength, supporting applications in multiple testing and complex models.

Empirical Bayes Factors for Common Hypothesis Tests

The paper by Frank Dudbridge addresses a well-known challenge in applying Bayes factors to composite hypotheses: specifying appropriate prior distributions when prior information is vague or scarce. Improper priors cannot be used in Bayes factors, while objective or subjective priors may not accurately reflect the intended state of knowledge or may conflict with subsequent studies. To address this, the paper revisits the posterior Bayes factor, in which the posterior distribution from the data at hand is re-used in the Bayes factor for the same data, and proposes empirical Bayes factors (EBFs) as adjustments that correct its bias and allow interpretation on the same scale as proper Bayes factors.
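
To make the construction concrete, a standard Bayes factor averages each model's likelihood over its prior, whereas the posterior Bayes factor averages it over the posterior obtained from the same data (generic notation below; it is not copied from the paper):

$$
\mathrm{BF}_{10} = \frac{\int f(y \mid \theta_1)\,\pi_1(\theta_1)\,d\theta_1}{\int f(y \mid \theta_0)\,\pi_0(\theta_0)\,d\theta_0},
\qquad
\mathrm{PBF}_{10} = \frac{\int f(y \mid \theta_1)\,\pi_1(\theta_1 \mid y)\,d\theta_1}{\int f(y \mid \theta_0)\,\pi_0(\theta_0 \mid y)\,d\theta_0}.
$$

Because the data enter twice in the posterior Bayes factor, it systematically overstates the evidence relative to a proper Bayes factor; this is the bias the EBF adjusts for.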

Key Contributions

  1. Empirical Bayes Factors: The paper derives EBFs by adjusting posterior Bayes factors for their bias relative to proper Bayes factors. In a regular normal model this bias is half the number of parameters on the log scale. The author also relates EBFs to the widely applicable information criterion (WAIC), showing how they retain the desirable properties of Bayes factors without relying on arbitrary priors.
  2. Test-Based EBFs: The paper develops test-based EBFs for several standard tests, so that an EBF can be computed alongside a conventional P-value. This makes the framework usable when only summary test statistics are available, integrating Bayesian and frequentist methods.
  3. Scale of Interpretation: A logarithmic scale with base 3.73 is introduced for interpreting the strength of Bayes factors, chosen to reflect the sharpest distinction between weaker and stronger belief. This scale aims to grade evidence strength without resorting to the subjective scales traditionally used in Bayesian statistics (a minimal computational sketch combining the bias adjustment, the P-value approximation, and this scale appears after the list).
  4. Applications to Multiple Testing: The EBF framework is extended to multiple hypothesis testing contexts, leveraging concepts similar to Storey's optimal discovery procedure to improve upon simultaneous inference. This marks a significant stride towards employing Bayesian methods in high-dimensional and complex settings.
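
As a rough illustration of how these pieces fit together, the Python sketch below applies the half-a-parameter-per-dimension log-scale adjustment, the 10p approximation stated in the abstract, and the base-3.73 strength scale. The function names and numbers are hypothetical, and the orientation of the 10p rule simply follows the abstract's wording; this is a sketch of the reported relationships, not code from the paper.

```python
import math

def adjusted_log_ebf(log_posterior_bf, n_params):
    """Adjust a log posterior Bayes factor for its bias.

    For a regular normal model the paper reports that the posterior Bayes
    factor overstates the evidence by half the number of parameters on the
    log scale, so subtracting n_params / 2 gives the log empirical Bayes
    factor (EBF).
    """
    return log_posterior_bf - n_params / 2.0

def approx_ebf_from_pvalue(p):
    """Approximate EBF when only a P-value is available (the abstract's 10p rule)."""
    return 10.0 * p

def strength_units(bf, base=3.73):
    """Express a Bayes factor on the proposed logarithmic scale with base 3.73."""
    return math.log(bf) / math.log(base)

# Illustrative numbers only; they are not taken from the paper.
log_pbf = 2.3   # hypothetical log posterior Bayes factor from a fitted model
k = 2           # hypothetical number of free parameters under the alternative
log_ebf = adjusted_log_ebf(log_pbf, k)
print(f"log EBF = {log_ebf:.2f}, "
      f"strength = {strength_units(math.exp(log_ebf)):.2f} base-3.73 units")

p_value = 0.01
print(f"approximate EBF from a P-value alone: {approx_ebf_from_pvalue(p_value):.3f}")
```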

Theoretical Insights and Implications

The work navigates the theoretical bridge between Bayesian and frequentist statistics. By correcting the bias inherent in the posterior Bayes factor, Dudbridge provides a methodology that maintains the integrity of Bayesian inference while accommodating scenarios with vague prior knowledge. Additionally, the proposed logarithmic scale for interpreting Bayes factors improves their interpretational clarity in practical applications.

Numerical Results

Numerical calculations throughout the paper substantiate the proposed adjustments, such as the bias corrections applied to the log posterior Bayes factor across various tests. Empirical case studies, including interpretation of clinical trial results and particle physics experiments, demonstrate the practical relevance and robustness of EBFs.

Future Directions

The development of EBFs invites further exploration into their application across diverse statistical settings and data types. The proposed framework can be expanded to encompass more complex model structures and varied data distributions, enhancing its applicability in emerging fields of research within AI and machine learning. Moreover, continued exploration of the interplay between EBFs and related information criteria may yield further insights into model selection and evidence synthesis.

Conclusion

Frank Dudbridge's paper presents a methodologically sound and practically valuable approach to addressing a longstanding issue in Bayesian statistics. At its core, the introduction of empirical Bayes factors represents a pivotal step in achieving a more objective evaluation of hypotheses when faced with prior ambiguities. By integrating the strengths from both Bayesian and frequentist perspectives, this research fortifies the potential for more informed and versatile statistical analyses.
