Detecting p-hacking
Abstract: We theoretically analyze the problem of testing for $p$-hacking based on distributions of $p$-values across multiple studies. We provide general results for when such distributions have testable restrictions (are non-increasing) under the null of no $p$-hacking. We find novel additional testable restrictions for $p$-values based on $t$-tests. Specifically, the shape of the power functions results in both complete monotonicity and bounds on the distribution of $p$-values. These testable restrictions yield more powerful tests for the null hypothesis of no $p$-hacking. When there is also publication bias, our tests are joint tests for $p$-hacking and publication bias. A reanalysis of two prominent datasets shows the usefulness of our new tests.
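To fix ideas, a minimal sketch of the basic testable restriction described above: under the null of no $p$-hacking the density of reported $p$-values is non-increasing, so a bin just below a salient threshold (e.g. 0.05) should not be more populated than the adjacent bin below it. The snippet below is only an illustration of this restriction, not the paper's proposed tests (which further exploit complete monotonicity and bounds for $t$-test based $p$-values); the helper `adjacent_bin_check` and its `threshold`/`width` parameters are hypothetical names introduced here.

```python
"""Illustrative check of the non-increasing-density restriction on p-values.

A sketch only: compares counts in two adjacent histogram bins just below a
salient threshold. Under no p-hacking the p-value density is non-increasing,
so the upper bin should not hold significantly more p-values than the lower.
"""
import numpy as np
from scipy.stats import binomtest


def adjacent_bin_check(pvals, threshold=0.05, width=0.01):
    """One-sided binomial check on bins [t-2w, t-w) and [t-w, t).

    Under a non-increasing density, a p-value falling in the union of the
    two bins lands in the upper bin with probability at most 1/2.
    """
    pvals = np.asarray(pvals)
    lower = np.sum((pvals >= threshold - 2 * width) & (pvals < threshold - width))
    upper = np.sum((pvals >= threshold - width) & (pvals < threshold))
    n = int(lower + upper)
    if n == 0:
        return np.nan
    # Small p-value here suggests a spike just below the threshold,
    # i.e. a violation of non-increasingness.
    return binomtest(int(upper), n, p=0.5, alternative="greater").pvalue


# Simulated example: uniform p-values under the null, plus a spike
# just below 0.05 mimicking p-hacked results.
rng = np.random.default_rng(0)
null_p = rng.uniform(size=1000)
hacked_p = rng.uniform(0.04, 0.05, size=60)
print(adjacent_bin_check(null_p))                                # large: no evidence
print(adjacent_bin_check(np.concatenate([null_p, hacked_p])))    # small: spike detected
```

Such bin-comparison checks only use the monotonicity restriction; the paper's contribution is that $t$-test based $p$-values satisfy stronger restrictions (complete monotonicity and explicit bounds), which support more powerful tests.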