
When Hindsight is Not 20/20: Testing Limits on Reflective Thinking in Large Language Models (2404.09129v1)

Published 14 Apr 2024 in cs.CL

Abstract: Recent studies suggest that self-reflective prompting can significantly enhance the reasoning capabilities of LLMs. However, the use of external feedback as a stop criterion raises doubts about the true extent of LLMs' ability to emulate human-like self-reflection. In this paper, we set out to clarify these capabilities under a more stringent evaluation setting in which we disallow any kind of external feedback. Our findings under this setting show a split: while self-reflection enhances performance in TruthfulQA, it adversely affects results in HotpotQA. We conduct follow-up analyses to clarify the contributing factors in these patterns, and find that the influence of self-reflection depends both on the reliability of models' initial responses and on overall question difficulty: specifically, self-reflection shows the most benefit when models are less likely to be correct initially, and when overall question difficulty is higher. We also find that self-reflection reduces tendency toward majority voting. Based on our findings, we propose guidelines for decisions on when to implement self-reflection. We release the codebase for reproducing our experiments at https://github.com/yanhong-lbh/LLM-SelfReflection-Eval.
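The evaluation setting described above disallows external feedback, so reflection must terminate after a fixed number of rounds rather than when an oracle signals correctness. A minimal sketch of such a loop is below; `model` is a hypothetical callable standing in for any LLM completion function, and the prompt wording is illustrative, not the paper's exact prompts.

```python
def self_reflect(model, question, rounds=1):
    """Answer a question, then revise it via self-reflection.

    `model` is a hypothetical prompt -> completion callable.
    No external feedback is used: the loop runs a fixed number
    of rounds, mirroring the paper's stricter evaluation setting.
    """
    # Initial answer with no reflection.
    answer = model(f"Q: {question}\nA:")
    for _ in range(rounds):
        # The model critiques its own draft...
        critique = model(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Critique this answer:"
        )
        # ...then revises based solely on its own critique.
        answer = model(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Critique: {critique}\nRevised answer:"
        )
    return answer
```

Because the stop criterion is a round budget rather than a correctness check, a revision can move away from an initially correct answer, which is one mechanism consistent with the mixed TruthfulQA/HotpotQA results reported above.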

Authors (3)
  1. Yanhong Li
  2. Chenghao Yang
  3. Allyson Ettinger
Citations (5)