Resubmission Bias in Peer Review
- Resubmission bias is the tendency to lower evaluation scores solely based on a manuscript's disclosed rejection history, independent of its actual content.
- Empirical studies indicate that a simple rejection signal can reduce scores by nearly one point on a 10-point scale, significantly lowering acceptance probabilities.
- Mitigation strategies such as delayed disclosure, targeted reviewer training, and selective access to review history are being explored to enhance fairness in peer review.
Resubmission bias denotes the systematic tendency of academic reviewers and selection systems to evaluate a research manuscript more negatively when informed of its prior rejection at another venue, irrespective of its substantive merit. This phenomenon has gained prominence as flagship machine learning conferences, such as NeurIPS and ICLR, introduce policies requiring or facilitating the disclosure or public release of a submission’s review history. Both experimental and modeling evidence demonstrates that disclosure of prior rejections, whether via explicit “resubmission signals” or open review records, can measurably reduce evaluation scores and acceptance probabilities for resubmitted work, with important implications for fairness, innovation, and the overall efficiency of the peer review process (Stelmakh et al., 2020, Zhang et al., 2023, Rao et al., 28 Nov 2025, Francois, 2015).
1. Formal Definition and Conceptual Scope
Within the peer review context, resubmission bias refers to the tendency of reviewers to lower their evaluation of a manuscript simply because they are aware it was previously rejected at a similar venue (Stelmakh et al., 2020). This can occur regardless of any objective change to the scientific content, and is typically triggered by explicit disclosure policies, “Harvard-style” rejection history flags, or public review repositories. The effect is distinct from general reviewer arbitrariness or noise; it is a shift in evaluation conditional specifically on prior negative outcome signals (Rao et al., 28 Nov 2025). The bias is known to operate both consciously and unconsciously—anchoring reviewers’ judgements, introducing confirmatory bias, and distorting the intended content-based focus of technical peer evaluation.
2. Experimental and Empirical Evidence
Controlled trials systematically quantify resubmission bias in peer review. A randomized controlled experiment with 133 novice reviewers (master’s and early-stage PhD students) at top US institutions found that a simple statement—“this submission was rejected at NeurIPS 2019”—embedded in an author checklist produced a mean score shift of Δ = –0.78 points (95% CI: [–1.30, –0.24]) on a 10-point overall score scale. Domain-specific criteria (originality, quality, clarity, significance) showed similar statistically significant negative shifts, with the largest standardized effect for perceived quality (d ≈ –0.23). Reviewer self-reported confidence was unaffected, suggesting that the bias operates at the level of evaluation rather than perceived expertise (Stelmakh et al., 2020).
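A minimal sketch of how such a shift could be estimated is given below, assuming two groups of 10-point overall scores (one group shown the rejection signal, one not); the data values, the percentile bootstrap, and the pooled-standard-deviation effect size are illustrative assumptions, not the study’s actual analysis pipeline.

```python
# Illustrative estimate of a score shift between reviewers shown a rejection
# signal ("treatment") and reviewers not shown it ("control"); hypothetical data.
import numpy as np

rng = np.random.default_rng(0)
control = np.array([6, 7, 5, 6, 8, 7, 6, 5, 7, 6], dtype=float)    # no signal
treatment = np.array([5, 6, 5, 5, 7, 6, 5, 4, 6, 6], dtype=float)  # signal disclosed

delta = treatment.mean() - control.mean()  # point estimate of the score shift

# Percentile bootstrap over reviewers for a 95% interval around delta
boot = []
for _ in range(10_000):
    c = rng.choice(control, size=control.size, replace=True)
    t = rng.choice(treatment, size=treatment.size, replace=True)
    boot.append(t.mean() - c.mean())
lo, hi = np.percentile(boot, [2.5, 97.5])

# Standardized effect size (Cohen's d with a pooled standard deviation)
pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
d = delta / pooled_sd

print(f"delta = {delta:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], d = {d:.2f}")
```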
Survey research corroborates the salience and consequences of resubmission bias. In a survey of 2,385 machine learning community members regarding open review policies, 41.6% ranked resubmission bias as the single most important negative impact of full disclosure of rejected submissions—a plurality exceeding all other concerns by a wide margin. A further 28.4% reported refraining from submitting to fully open venues at least once; among these, 57% cited resubmission bias as a reason (Rao et al., 28 Nov 2025). These results underline both the pervasiveness of perceived resubmission bias and its capacity to shape submission behavior and venue choice.
3. Theoretical Models and Quantitative Metrics
Theoretical frameworks further illuminate the mechanisms and structural effects of resubmission bias. In Bayesian analyses of “arbitrariness” (the conditional probability of a previously accepted paper being rejected on re-review), the conditional rejection probability for a resubmission is defined as a = 1 – α = 1 – r/φ, where α is the probability a paper meeting basic quality criteria is accepted, r is the observed overall acceptance rate, and φ is the fraction of submissions passing the minimal quality bar (so that r = αφ) (Francois, 2015). For the NeurIPS 2014 experiment, the estimated arbitrariness (resubmission bias) was 61% (95% CI: 43%–73%), meaning a majority of previously accepted-quality papers would be rejected if resubmitted and evaluated anew.
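The arithmetic behind this estimate can be made explicit in a short sketch, assuming the relationship r = αφ described above; the numerical inputs are illustrative values chosen to land near the reported estimate, not figures taken from the paper.

```python
# Arbitrariness a = 1 - r/phi under the assumption that only papers clearing
# the minimal quality bar are ever accepted (so r = alpha * phi).
r = 0.23     # observed overall acceptance rate (illustrative)
phi = 0.58   # assumed fraction of submissions clearing the quality bar

alpha = r / phi   # probability a qualifying paper is accepted in one round
a = 1 - alpha     # probability an accepted-quality paper is rejected on re-review
print(f"arbitrariness a = {a:.2f}")  # ~0.60, near the reported 61%
```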
Stackelberg-game models formalize the equilibrium interplay between conference policy and author strategy, identifying a “resubmission gap” Δ = τ* – θ*: the difference between the explicit acceptance threshold τ* needed to account for repeated resubmissions (by patient and persistent authors) and the de facto threshold θ* required to achieve a desired program quality. Analytical results show that this gap grows with reviewer noise (σ), conference prestige (V), and author patience (δ), and that attempts to reduce review redundancy without accounting for resubmission bias can lead to misaligned acceptance standards or erosions in perceived fairness (Zhang et al., 2023).
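A toy numerical sketch can convey why the gap widens with noise and patience, assuming Gaussian review noise and a fixed number of independent resubmission attempts as a stand-in for author patience; this is a simplified illustration for intuition, not the Stackelberg formulation of Zhang et al. (2023).

```python
# Toy illustration of the resubmission gap: with Gaussian review noise and
# repeated resubmission, the quality level accepted half the time (a de facto
# threshold) falls below the explicit score threshold, and the gap widens with
# noise and with the number of attempts. All parameters are illustrative.
from math import erf, sqrt

def accept_once(q, tau, sigma):
    """P(review score >= tau) for true quality q under Gaussian noise."""
    return 0.5 * (1 - erf((tau - q) / (sigma * sqrt(2))))

def accept_eventually(q, tau, sigma, attempts):
    """P(accepted within `attempts` independent (re)submissions)."""
    return 1 - (1 - accept_once(q, tau, sigma)) ** attempts

def de_facto_threshold(tau, sigma, attempts):
    """Quality level accepted with probability 0.5, found by bisection."""
    lo, hi = tau - 10.0, tau + 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if accept_eventually(mid, tau, sigma, attempts) > 0.5:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

tau = 6.0  # explicit acceptance threshold on a 10-point scale
for sigma in (0.5, 1.0, 2.0):      # reviewer noise
    for attempts in (1, 3, 5):     # proxy for author patience/persistence
        theta = de_facto_threshold(tau, sigma, attempts)
        print(f"sigma={sigma}, attempts={attempts}: gap = {tau - theta:.2f}")
```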
4. Quantitative Outcomes and Community Perceptions
The following table summarizes key quantitative outcomes from the cited studies, focusing on explicit experimental measures of resubmission bias in reviewer scoring and survey-based measures of perceived impact:
| Item | Quantitative Outcome | Source |
|---|---|---|
| Overall reviewer score Δ | –0.78 (95% CI: [–1.30, –0.24]) | (Stelmakh et al., 2020) |
| “Quality” criterion Δ | –0.46 (95% CI: [–0.69, –0.23]) | (Stelmakh et al., 2020) |
| Resubmission arbitrariness a | 0.61 (95% CI: [0.43, 0.73]) | (Francois, 2015) |
| % ranking bias as top concern | 41.6% (n=2,299 respondents) | (Rao et al., 28 Nov 2025) |
| % refraining from open venues | 28.4% (n=2,187 authors, 57% citing bias) | (Rao et al., 28 Nov 2025) |
The empirical magnitude of resubmission bias is sufficient to halve a paper’s acceptance probability if it is near the threshold for program inclusion. A shift of just one point in a reviewer’s score can substantially change acceptance odds, amplifying the risks of policy-induced bias (Stelmakh et al., 2020).
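The sensitivity of acceptance odds to such a shift can be illustrated with a simple Gaussian score model, assuming a near-threshold paper whose realized committee score varies around its expected value; the threshold, spread, and penalty values below are illustrative assumptions, not figures from the experiment.

```python
# How a fixed score penalty changes acceptance odds for a near-threshold paper,
# under an assumed Gaussian model of realized committee scores.
from math import erf, sqrt

def p_accept(expected_score, threshold, spread):
    """P(realized score >= threshold) under a Gaussian score model."""
    return 0.5 * (1 - erf((threshold - expected_score) / (spread * sqrt(2))))

threshold, spread = 6.0, 1.0
for penalty in (0.0, 0.78):
    p = p_accept(6.0 - penalty, threshold, spread)
    print(f"penalty = {penalty}: acceptance probability ~ {p:.2f}")
# Roughly 0.50 without the penalty versus about 0.22 with it for a paper
# sitting exactly at the threshold, i.e. more than a halving of the odds.
```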
5. Causes and Mechanisms
Observed resubmission bias is consistent with several canonical cognitive biases: anchoring (with prior rejection acting as an anchor on perceived merit), conformity (desire to agree with prior panels), and confirmatory bias (tendency to search for negative evidence when alert to prior negative decisions). These effects are especially pronounced among novice reviewers, who may have less confidence in their own judgments and be more susceptible to explicit cues of “rejection history.” Resubmission bias also emerges structurally from peer review system design, especially in the presence of noisy evaluators, multiple rounds, and policies that explicitly surface historical outcomes (Stelmakh et al., 2020, Zhang et al., 2023).
6. Systemic Impact and Policy Implications
Resubmission bias produces an array of systemic consequences for both authors and the research ecosystem. For authors, it imposes a cumulative penalty for early rejections—especially on innovative work or submissions from less well-networked groups—that can persist across venues even after substantive revision. This may deter risk-taking, encourage “safe” incrementalism, and disincentivize participation in fully open venues. For conferences and the field at large, resubmission bias can inflate review burden (as borderline papers re-enter the system multiple times), lower the aggregate quality threshold, and undermine perceptions of fairness (Zhang et al., 2023, Rao et al., 28 Nov 2025).
Community resistance is evident in survey evidence: while over 80% support release of reviews for accepted papers, just 27.1% support release of reviews and manuscripts for rejected papers—the locus of resubmission bias (Rao et al., 28 Nov 2025).
7. Mitigation Strategies and Future Directions
Several mitigation strategies have been proposed to reduce or counteract resubmission bias:
- Delayed Disclosure: Withhold “resubmission” signals from reviewers until after initial scoring; allow past reviews to inform only the rebuttal or discussion phases (Stelmakh et al., 2020).
- Reviewer Training: Explicitly incorporate warnings about resubmission bias into novice reviewer training, emphasizing evaluation on evidence rather than provenance (Stelmakh et al., 2020).
- Selective Access: Restrict full prior-review disclosure to senior meta-reviewers or area chairs, keeping frontline reviewers blind to outcome history (Stelmakh et al., 2020).
- Statistical Calibration: If prior outcome information is unavoidable, algorithmically adjust scores for the identified bias magnitude (e.g., add the measured Δ back to affected scores to level the playing field; a minimal sketch follows this list) (Stelmakh et al., 2020).
- Institutional Memory: Limit resubmission cycles or require submission of prior reviews to reduce redundant reviewing and clarify the improvement trajectory (Zhang et al., 2023).
- Partial Openness: Release reviews for accepted papers only or with author opt-in for rejected ones, balancing transparency with author autonomy (Rao et al., 28 Nov 2025).
- “Fresh Review” Option: Permit authors to request blind reviews without exposure to the prior record when resubmitting (Rao et al., 28 Nov 2025).
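The statistical-calibration idea can be sketched as a simple post-hoc adjustment, assuming the estimated bias is known and that each review records whether the reviewer saw the resubmission signal; the record fields and the single additive offset are illustrative assumptions, not a prescribed procedure.

```python
# Post-hoc calibration sketch: add back the measured bias to scores from
# reviewers who saw the resubmission signal (hypothetical record structure).
MEASURED_BIAS = -0.78  # estimated score shift attributable to the signal

def calibrate(reviews):
    """Return reviews with a bias-adjusted score where the signal was visible."""
    adjusted = []
    for r in reviews:
        score = r["score"]
        if r["saw_resubmission_signal"]:
            score -= MEASURED_BIAS  # subtracting a negative shift raises the score
        adjusted.append({**r, "calibrated_score": score})
    return adjusted

reviews = [
    {"reviewer": "R1", "score": 5.0, "saw_resubmission_signal": True},
    {"reviewer": "R2", "score": 6.0, "saw_resubmission_signal": False},
]
for r in calibrate(reviews):
    print(r["reviewer"], r["calibrated_score"])
```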
A composite approach—combining partial openness, reviewer calibration, and policy trial with experimental measurement—is recommended to sustain the benefits of transparency while curbing the negative externalities of resubmission bias.