
Manifestation of Confidence in LLM Reasoning

Determine how confidence manifests in the reasoning process of large language models by identifying and characterizing the internal signals, present during token generation, that reflect confidence in both intermediate reasoning steps and final answers.


Background

The paper discusses limitations of Test-Time Reinforcement Learning (TTRL) approaches that use majority voting as a proxy for confidence, noting the risk of reinforcing incorrect pseudo-labels. It motivates exploring other forms of prior knowledge within LLMs, particularly the token-level probability distribution, which may encode signals of uncertainty, confidence, and decisiveness that are relevant to reasoning quality.
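
To ground what such token-level signals might look like, here is a minimal sketch (a hypothetical illustration, not the paper's definition of confidence) of three per-step quantities commonly treated as confidence proxies: the probability of the top token, the negative entropy of the next-token distribution, and the margin between the top two candidates. The function name and return format are assumptions made for illustration.

```python
import numpy as np

def token_confidence_signals(logits: np.ndarray) -> dict:
    """Illustrative per-step confidence proxies from next-token logits.

    logits: array of shape (seq_len, vocab_size), one row per generated token.
    None of these quantities is claimed to be the paper's notion of
    confidence; they are common candidates for such a signal.
    """
    # Numerically stable softmax over the vocabulary at each step.
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)

    top_prob = probs.max(axis=-1)                           # decisiveness of the argmax token
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1) # spread of the distribution (uncertainty)
    top2 = np.sort(probs, axis=-1)[:, -2:]                  # two largest probabilities per step
    margin = top2[:, 1] - top2[:, 0]                        # gap between top-1 and top-2 candidates

    return {"top_prob": top_prob, "neg_entropy": -entropy, "margin": margin}

# Toy usage: 5 generation steps over a vocabulary of 100 tokens.
rng = np.random.default_rng(0)
signals = token_confidence_signals(rng.normal(size=(5, 100)))
```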

This open question frames the need to understand confidence at the process level, not only at the outcome level, as a foundation for designing intrinsic reward mechanisms that can stably guide reinforcement learning on unlabeled data.
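
As one hypothetical illustration of what an intrinsic reward built on process-level confidence could look like, the sketch below blends per-step confidence over a sampled reasoning trace with confidence on the final-answer tokens. The aggregation rule, the `alpha` weighting, and the function name are all assumptions for illustration, not the paper's mechanism.

```python
import numpy as np

def intrinsic_confidence_reward(step_confidence: np.ndarray,
                                answer_confidence: float,
                                alpha: float = 0.5) -> float:
    """Hypothetical intrinsic reward mixing process- and outcome-level confidence.

    step_confidence: per-step signal over the reasoning trace
        (e.g. the neg_entropy values from the earlier sketch).
    answer_confidence: the same signal computed on the final-answer tokens alone.
    alpha: illustrative weight balancing process versus outcome confidence.
    """
    process_term = float(step_confidence.mean())  # how confident the trace is on average
    return alpha * process_term + (1.0 - alpha) * answer_confidence
```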

References

While using confidence as a correctness proxy aligns with cognitive principles, a critical question remains: how is confidence actually manifested in the reasoning process of LLMs?