
Understanding the Dark Side of LLMs' Intrinsic Self-Correction (2412.14959v1)

Published 19 Dec 2024 in cs.CL

Abstract: Intrinsic self-correction was proposed to improve LLMs' responses via feedback prompts solely based on their inherent capability. However, recent works show that LLMs' intrinsic self-correction fails without oracle labels as feedback prompts. In this paper, we aim to interpret LLMs' intrinsic self-correction for different tasks, especially for those failure cases. By including one simple task and three complex tasks with state-of-the-art (SOTA) LLMs like ChatGPT families (o1, 4o, 3.5-turbo) and Llama families (2-7B, 3-8B, and 3.1-8B), we design three interpretation methods to reveal the dark side of LLMs' intrinsic self-correction. We identify that intrinsic self-correction can (1) cause LLMs to waver in both intermediate and final answers and lead to prompt bias on simple factual questions; (2) introduce human-like cognitive bias on complex tasks. In light of our findings, we also provide two simple yet effective strategies for alleviation: question repeating and supervised fine-tuning with a few samples. We open-source our work at https://x-isc.info/.

Analyzing the Limitations of Intrinsic Self-Correction in LLMs

The paper "Understanding the Dark Side of LLMs' Intrinsic Self-Correction" critically examines the intrinsic self-correction capabilities of state-of-the-art LLMs, such as models from the ChatGPT and Llama families. Intrinsic self-correction—the process where LLMs attempt to rectify their responses based on internal feedback rather than external data—has been assumed to enhance model accuracy. However, this paper challenges this assumption by systematically analyzing failure cases across various tasks.

Key Findings and Methodological Approach

The research shows that intrinsic self-correction can degrade performance rather than improve it, introducing cognitive biases and prompt-related failures:

  1. Task Performance and Self-Correction Failures: The paper evaluates multiple tasks, including simple factual questions and more complex tasks such as decision-making, reasoning, and programming. Across these tasks, intrinsic self-correction did not consistently improve performance and often degraded it. For instance, Llama-3.1-8B suffered a 20.4% drop in accuracy on Yes/No questions, with 58.8% of initially correct answers overturned during self-correction.
  2. Interpretation Through Error Analysis: The authors employed three interpretability methods to understand the self-correction failures:
    • Mechanistic Interpretability: This approach showed that LLMs waver between intermediate answers, impacting the final output.
    • Token-Level Interpretability: This analysis revealed a prompt bias, where models attend to the feedback prompt more than to the original question.
    • Human-Like Cognitive Bias: The paper identified patterns akin to human cognitive biases, such as overthinking, cognitive overload, and perfectionism, that surface when the models work through complex tasks.
  3. Strategies for Alleviating Failures: The paper proposes two interventions (a sketch of the first appears after this list):
    • Question Repeating: Appending the original question to the end of the feedback prompt, which reduces prompt bias and keeps the model focused on the task objective.
    • Supervised Fine-Tuning (SFT): Fine-tuning on a small number of task-focused samples to adjust the model's behavior rather than expand its knowledge, which improved outcomes and transferred to complex task settings.
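
As a rough illustration of the question-repeating mitigation, the feedback prompt can be rebuilt so that the original question is appended at the end, counteracting the bias toward the self-correction instruction. The wording below is illustrative, not the paper's exact template; the same `query_llm` loop from the earlier sketch could use this prompt in place of the bare feedback instruction.

```python
def build_feedback_prompt(original_question: str,
                          repeat_question: bool = True) -> str:
    """Construct the self-correction prompt, optionally re-attaching the
    original question (the question-repeating mitigation)."""
    prompt = ("Review your previous answer. If you find any problems, "
              "fix them and give an improved answer.")
    if repeat_question:
        # Re-anchor the model on the task it is actually supposed to solve.
        prompt += f"\nThe original question is: {original_question}"
    return prompt
```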

Implications and Future Directions

The findings point to critical pitfalls in relying solely on intrinsic self-correction to improve LLM reliability. The observation that models readily oscillate in their decisions because of internal biases and prompt interpretation calls for a reevaluation of LLM development strategies. Future work should focus on refining these self-corrective processes with an emphasis on behavioral adjustment rather than knowledge expansion alone.

The targeted application of the proposed mitigation strategies shows promise in addressing specific self-correction failures, suggesting that further fine-grained tuning can extend LLMs' accuracy across diverse contexts. Researchers are encouraged to build on this analysis and to explore additional methods and frameworks that use interpretability to systematically improve LLMs' self-correction routines.

Authors (7)
  1. Qingjie Zhang (5 papers)
  2. Han Qiu (60 papers)
  3. Di Wang (407 papers)
  4. Haoting Qian (1 paper)
  5. Yiming Li (199 papers)
  6. Tianwei Zhang (199 papers)
  7. Minlie Huang (225 papers)