Small Language Models Need Strong Verifiers to Self-Correct Reasoning (2404.17140v2)
Abstract: Self-correction has emerged as a promising way to boost the reasoning performance of LLMs, in which models refine their solutions using self-generated critiques that pinpoint the errors. This work explores whether small (≤ 13B) language models (LMs) can self-correct on reasoning tasks with minimal input from stronger LMs. We propose a novel pipeline that prompts smaller LMs to collect self-correction data that supports the training of self-refinement abilities. First, we leverage correct solutions to guide the model in critiquing its incorrect responses. Second, the generated critiques, after filtering, are used for supervised fine-tuning of the self-correcting reasoner through solution refinement. Our experimental results show improved self-correction abilities for two models on five datasets spanning math and commonsense reasoning, with notable performance gains when paired with a strong GPT-4-based verifier, though limitations are identified when a weak self-verifier is used to decide when to correct.
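To make the two-step pipeline concrete, below is a minimal Python sketch of guided critique collection and refinement-based filtering. Everything here is an illustrative assumption, not the paper's implementation: `small_lm`, `extract_answer`, the prompt templates, and the sample count of 8 are all hypothetical stand-ins.

```python
# Hypothetical sketch of the self-correction data-collection pipeline
# described in the abstract. Function names, prompts, and hyperparameters
# are illustrative assumptions, not the authors' actual code.

from dataclasses import dataclass

@dataclass
class Example:
    question: str
    gold_answer: str

def small_lm(prompt: str) -> str:
    """Stand-in for a small (<= 13B) LM call; replace with a real model."""
    raise NotImplementedError

def extract_answer(solution: str) -> str:
    """Pull the final answer from a generated solution (task-specific)."""
    return solution.strip().splitlines()[-1]

def collect_self_correction_data(dataset: list[Example]) -> list[dict]:
    """Step 1: use a correct solution to guide critiques of an incorrect one.
    Step 2: keep only critiques whose refinement recovers the gold answer."""
    training_data = []
    for ex in dataset:
        # Sample several solutions to find one incorrect and one correct attempt.
        attempts = [small_lm(f"Solve step by step:\n{ex.question}") for _ in range(8)]
        wrong = next((s for s in attempts if extract_answer(s) != ex.gold_answer), None)
        right = next((s for s in attempts if extract_answer(s) == ex.gold_answer), None)
        if wrong is None or right is None:
            continue
        # Guided critique: showing the correct solution helps the small LM
        # pinpoint the actual error in the incorrect one.
        critique = small_lm(
            f"Question:\n{ex.question}\n\nIncorrect solution:\n{wrong}\n\n"
            f"Reference solution:\n{right}\n\nExplain what went wrong:"
        )
        # Filtering: accept the critique only if refining with it yields
        # the gold answer; the filtered pairs become SFT data.
        refined = small_lm(
            f"Question:\n{ex.question}\n\nPrevious solution:\n{wrong}\n\n"
            f"Critique:\n{critique}\n\nRevised solution:"
        )
        if extract_answer(refined) == ex.gold_answer:
            training_data.append(
                {"question": ex.question, "solution": wrong,
                 "critique": critique, "refinement": refined}
            )
    return training_data
```

Under this reading, the gold answer serves only as an outcome-level filter at data-collection time; at inference, a separate verifier (e.g., the GPT-4-based one mentioned in the abstract) would take over the job of deciding when to trigger correction.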
- Yunxiang Zhang (22 papers)
- Muhammad Khalifa (24 papers)
- Lajanugen Logeswaran (30 papers)
- Jaekyeom Kim (12 papers)
- Moontae Lee (54 papers)
- Honglak Lee (174 papers)
- Lu Wang (329 papers)