Small Language Models Need Strong Verifiers to Self-Correct Reasoning (2404.17140v2)

Published 26 Apr 2024 in cs.CL

Abstract: Self-correction has emerged as a promising way to boost the reasoning performance of LLMs, where models refine their solutions using self-generated critiques that pinpoint the errors. This work explores whether small (<= 13B) language models (LMs) can self-correct on reasoning tasks with minimal input from stronger LMs. We propose a novel pipeline that prompts smaller LMs to collect self-correction data that supports the training of self-refinement abilities. First, we leverage correct solutions to guide the model in critiquing its incorrect responses. Second, the generated critiques, after filtering, are used for supervised fine-tuning of the self-correcting reasoner through solution refinement. Our experimental results show improved self-correction abilities of two models on five datasets spanning math and commonsense reasoning, with notable performance gains when paired with a strong GPT-4-based verifier, though limitations are identified when a weak self-verifier is used to decide when to correct.
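
The abstract describes a two-stage recipe: gold-solution-guided critique generation with filtering to build training data, supervised fine-tuning of the refiner, and, at inference time, a verifier that decides when to trigger correction. The snippet below is a minimal Python sketch of that control flow only; `small_lm`, `is_correct`, `verifier_says_wrong`, and the prompt templates are illustrative placeholders, not the authors' implementation or prompts.

```python
# Hypothetical sketch of the critique-collection and verifier-gated
# self-correction loop summarized in the abstract. All callables and
# prompt wordings are assumptions for illustration.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Example:
    question: str
    gold_solution: str           # known-correct solution, used only at data-collection time
    wrong_attempt: str           # the small LM's incorrect first answer
    critique: Optional[str] = None
    refinement: Optional[str] = None


def collect_self_correction_data(
    examples: List[Example],
    small_lm: Callable[[str], str],
    is_correct: Callable[[str, str], bool],
) -> List[Example]:
    """Prompt the small LM for critiques guided by the correct solution,
    then keep only critiques whose refinement actually fixes the answer."""
    kept = []
    for ex in examples:
        critique_prompt = (
            f"Question: {ex.question}\n"
            f"Reference solution: {ex.gold_solution}\n"
            f"Incorrect attempt: {ex.wrong_attempt}\n"
            "Point out the error in the attempt."
        )
        ex.critique = small_lm(critique_prompt)

        refine_prompt = (
            f"Question: {ex.question}\n"
            f"Previous attempt: {ex.wrong_attempt}\n"
            f"Critique: {ex.critique}\n"
            "Write a corrected solution."
        )
        ex.refinement = small_lm(refine_prompt)

        # Filtering: a critique is only kept if refining with it
        # yields a correct solution.
        if is_correct(ex.refinement, ex.gold_solution):
            kept.append(ex)
    return kept  # used downstream for supervised fine-tuning of the refiner


def answer_with_verifier(
    question: str,
    small_lm: Callable[[str], str],
    verifier_says_wrong: Callable[[str, str], bool],
) -> str:
    """Inference: only self-correct when the verifier flags the draft."""
    draft = small_lm(f"Question: {question}\nSolve step by step.")
    if not verifier_says_wrong(question, draft):
        return draft
    critique = small_lm(f"Question: {question}\nAttempt: {draft}\nFind the mistake.")
    return small_lm(
        f"Question: {question}\nAttempt: {draft}\nCritique: {critique}\nRevise the solution."
    )
```

The `verifier_says_wrong` gate reflects the abstract's central caveat: gains are notable when a strong GPT-4-based verifier decides when to correct, and limited when a weak self-verifier plays that role.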

Authors (7)
  1. Yunxiang Zhang (22 papers)
  2. Muhammad Khalifa (24 papers)
  3. Lajanugen Logeswaran (30 papers)
  4. Jaekyeom Kim (12 papers)
  5. Moontae Lee (54 papers)
  6. Honglak Lee (174 papers)
  7. Lu Wang (329 papers)
Citations (10)