Survivors, Complainers, and Borderliners: Upward Bias in Online Discussions of Academic Conference Reviews (2509.16831v1)

Published 20 Sep 2025 in cs.SI and cs.DL

Abstract: Online discussion platforms, such as community Q&A sites and forums, have become important hubs where academic conference authors share and seek information about the peer review process and outcomes. However, these discussions involve only a subset of all submissions, raising concerns about the representativeness of the self-reported review scores. In this paper, we conduct a systematic study comparing the review score distributions of self-reported submissions in online discussions (based on data collected from Zhihu and Reddit) with those of all submissions. We reveal a consistent upward bias: the score distribution of self-reported samples is shifted upward relative to the population score distribution, with this difference statistically significant in most cases. Our analysis identifies three distinct contributors to this bias: (1) survivors, authors of accepted papers who are more likely to share good results than authors of rejected papers, who tend to conceal bad ones; (2) complainers, authors of high-scoring rejected papers who are more likely to voice complaints about the peer review process or outcomes than those with low scores; and (3) borderliners, authors with borderline scores who face greater uncertainty prior to decision announcements and are more likely to seek advice during the rebuttal period. These findings have important implications for how information seekers should interpret online discussions of academic conference reviews.
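
The abstract describes comparing the score distribution of self-reported submissions against the distribution over all submissions and testing whether the upward shift is statistically significant. The abstract does not name the specific test used, so the sketch below is only a hypothetical illustration of that kind of comparison, using placeholder data and a one-sided Mann-Whitney U test.

    # Hypothetical sketch of the distribution comparison described in the abstract.
    # All data here are synthetic placeholders; the paper's actual data and test
    # may differ.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Placeholder: review scores for all submissions (population) and for the
    # self-selected subset whose authors posted their scores online.
    population_scores = rng.normal(loc=5.0, scale=1.5, size=5000).clip(1, 10)
    self_reported_scores = rng.normal(loc=5.6, scale=1.4, size=300).clip(1, 10)

    # One-sided test: are self-reported scores stochastically larger than the
    # population scores, i.e., shifted upward?
    u_stat, p_value = stats.mannwhitneyu(
        self_reported_scores, population_scores, alternative="greater"
    )
    print(f"U = {u_stat:.1f}, one-sided p = {p_value:.3g}")
    print(f"mean shift = {self_reported_scores.mean() - population_scores.mean():.2f}")

Run against real score data, a small p-value in this setup would indicate that the self-reported sample is shifted upward relative to the population, which is the pattern the paper reports for most of the conferences it examines.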
