Learning to Check: Unleashing Potentials for Self-Correction in Large Language Models (2402.13035v3)

Published 20 Feb 2024 in cs.CL and cs.AI

Abstract: Self-correction has achieved impressive results in enhancing the style and security of the output generated by LLMs. However, recent studies suggest that self-correction can be limited or even counterproductive in reasoning tasks, because LLMs struggle to identify logical mistakes. In this paper, we aim to enhance the self-checking capabilities of LLMs by constructing training data for checking tasks. Specifically, we apply the Chain of Thought (CoT) methodology to self-checking tasks, using fine-grained step-level analyses and explanations to assess the correctness of reasoning paths. We propose a specialized checking format called "Step CoT Check". Following this format, we construct a checking-correction dataset that includes detailed step-by-step analysis and checking. We then fine-tune LLMs to enhance their error detection and correction abilities. Our experiments demonstrate that fine-tuning with the "Step CoT Check" format significantly improves the self-checking and self-correction abilities of LLMs across multiple benchmarks. This approach outperforms other formats, especially in locating the incorrect position, with greater benefits observed in larger models. For reproducibility, all datasets and code are provided at https://github.com/bammt/Learn-to-check.
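To make the idea of step-level checking concrete, here is a minimal sketch of how a "Step CoT Check"-style prompt might be assembled. The template, function name, and output phrasing below are illustrative assumptions, not the paper's actual format (which is defined in the linked repository).

```python
# Hypothetical sketch of a step-level checking prompt in the spirit of
# "Step CoT Check": the checker reviews each reasoning step in order and
# reports the first incorrect step. Template wording is an assumption.

def build_step_check_prompt(question: str, steps: list[str]) -> str:
    """Assemble a prompt asking a model to verify a solution step by step."""
    numbered = "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(steps))
    return (
        f"Question: {question}\n"
        f"Candidate solution:\n{numbered}\n\n"
        "Check each step in order. For every step, explain whether it "
        "follows from the question and the previous steps. Conclude with "
        "either 'All steps are correct.' or "
        "'The first incorrect step is Step <k>.'"
    )

prompt = build_step_check_prompt(
    "What is 3 * (4 + 5)?",
    ["4 + 5 = 9", "3 * 9 = 28"],  # the second step has an arithmetic error
)
print(prompt)
```

Pairs of such prompts with step-by-step verdicts (including the index of the first wrong step) form the kind of checking-correction data the paper fine-tunes on.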

Authors (5)
  1. Che Zhang (5 papers)
  2. Zhenyang Xiao (9 papers)
  3. Chengcheng Han (83 papers)
  4. Yixin Lian (7 papers)
  5. Yuejian Fang (18 papers)