
Does selective error detection extend to favorable results?

Establish whether researchers' probability of detecting coding errors is systematically lower when those errors produce favorable outcomes, defined as results that are more likely to be published or that support the researchers' hypotheses. This would determine whether selective error detection depends on result favorability in addition to whether results are expected or unexpected.


Background

The paper reports experimental evidence from a randomized coding task embedded in the World Bank’s recruitment process showing that individuals are significantly more likely to detect coding errors when those errors lead to unexpected results. The manipulated error involved failing to treat the value 99 as a missing outcome in regression analyses, and the randomized design ensured that the same error could yield either expected or unexpected findings.
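The manipulated error can be illustrated with simulated data. This is a hedged sketch, not the paper's actual task or code: it assumes an outcome variable that codes missing values as 99 and shows how leaving those values in the regression can fabricate an effect where none exists.

```python
import numpy as np

# Hypothetical illustration (not the paper's code): the outcome codes
# missing values as 99, and the buggy analysis fails to recode 99 to
# missing before estimating the treatment effect. All data are simulated.

rng = np.random.default_rng(0)
n = 2000
treatment = rng.integers(0, 2, n)

# True treatment effect is zero.
outcome = rng.normal(0.0, 1.0, n)

# Missingness (coded as 99) is more common in the treated group, so
# leaving the 99s in the data produces a large spurious positive estimate.
missing = rng.random(n) < (0.05 + 0.10 * treatment)
outcome_coded = np.where(missing, 99.0, outcome)

def ols_slope(x, y):
    """OLS slope of y on x, with an intercept."""
    xc = x - x.mean()
    return float(xc @ (y - y.mean()) / (xc @ xc))

# Buggy analysis: 99s treated as real outcome values.
buggy = ols_slope(treatment.astype(float), outcome_coded)

# Correct analysis: drop observations coded as missing.
keep = outcome_coded != 99.0
correct = ols_slope(treatment[keep].astype(float), outcome_coded[keep])

print(f"buggy estimate:   {buggy:.2f}")   # far from the true effect of 0
print(f"correct estimate: {correct:.2f}")  # close to 0
```

Whether the spurious estimate looks "expected" or "unexpected" depends on the researcher's prior, which is the variation the paper's randomized design exploits.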

In the Discussion, the authors propose that this mechanism might also apply to favorable results (outcomes perceived as publishable or supportive of hypotheses), suggesting a potential pathway for systematic bias if researchers are less inclined to scrutinize coding when it produces desirable findings. Clarifying whether result favorability influences error detection would have implications for research practices, particularly for placebo tests and broader concerns about replication, transparency, and publication bias.

References

While our experimental design focuses on whether coding errors lead to expected or unexpected results, a natural conjecture is that this mechanism may also extend to favorable results, that is, results that researchers view as more likely to be published or that support their hypotheses.

Ferman et al., "There must be an error here! Experimental evidence on coding errors' biases" (arXiv:2508.20069, 27 Aug 2025), Discussion.