Uncalibrated Reasoning: GRPO Induces Overconfidence for Stochastic Outcomes (2508.11800v1)
Abstract: Reinforcement learning (RL) has proven remarkably effective at improving the accuracy of LLMs in verifiable and deterministic domains like mathematics. Here, we examine whether current RL methods are also effective at optimizing LLMs in verifiable domains with stochastic outcomes, like scientific experiments. Through applications to synthetic data and real-world biological experiments, we demonstrate that Group Relative Policy Optimization (GRPO) induces overconfident probability predictions for binary stochastic outcomes, while Proximal Policy Optimization (PPO) and REINFORCE Leave-One-Out (RLOO) yield well-calibrated models. We show that removing the group standard-deviation normalization in GRPO fixes its miscalibration and provide a theoretical explanation for why this normalization causes overconfidence. Our results provide new evidence against the use of standard-deviation normalization in GRPO and help pave the way for applications of RL for reasoning LLMs beyond deterministic domains.
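For concreteness, the sketch below contrasts the standard GRPO group advantage (rewards centered on the group mean and divided by the group standard deviation) with the variant that drops the standard-deviation term. This is an illustrative sketch, not code from the paper: the function name, epsilon, and example rollout group are assumed for demonstration.

```python
import numpy as np

def group_advantages(rewards, normalize_std=True, eps=1e-8):
    """Group-relative advantages for one prompt's sampled completions.

    normalize_std=True  -> mean-centered rewards divided by the group
                           standard deviation (standard GRPO normalization).
    normalize_std=False -> mean-centered rewards only (the variant without
                           standard-deviation normalization).
    """
    rewards = np.asarray(rewards, dtype=np.float64)
    centered = rewards - rewards.mean()
    if normalize_std:
        # When a group of binary rewards is nearly unanimous, the std is
        # small, so dividing by it inflates the advantage magnitudes.
        return centered / (rewards.std() + eps)
    return centered

# Hypothetical group of 8 rollouts with binary stochastic rewards.
group = [1, 1, 1, 1, 0, 1, 1, 1]
print(group_advantages(group, normalize_std=True))   # rescaled by group std
print(group_advantages(group, normalize_std=False))  # mean-centered only
```

In the stochastic-outcome setting the abstract describes, the rescaling by a small group standard deviation is the term at issue: it amplifies updates toward the majority outcome of each group, which is consistent with the overconfidence the paper attributes to this normalization.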