
RCOT: Detecting and Rectifying Factual Inconsistency in Reasoning by Reversing Chain-of-Thought (2305.11499v2)

Published 19 May 2023 in cs.CL
RCOT: Detecting and Rectifying Factual Inconsistency in Reasoning by Reversing Chain-of-Thought

Abstract: LLMs have achieved promising performance on arithmetic reasoning tasks by incorporating step-by-step chain-of-thought (CoT) prompting. However, LLMs face challenges in maintaining factual consistency during reasoning, exhibiting tendencies to condition overlooking, question misinterpretation, and condition hallucination over given problems. Existing methods use coarse-grained feedback (e.g., whether the answer is correct) to improve factual consistency. In this work, we propose RCoT (Reversing Chain-of-Thought), a novel method to improve LLMs' reasoning abilities by automatically detecting and rectifying factual inconsistency in LLMs' generated solutions. To detect factual inconsistency, RCoT first asks LLMs to reconstruct the problem based on generated solutions. Then fine-grained comparisons between the original problem and the reconstructed problem expose the factual inconsistency in the original solutions. To rectify the solution, RCoT formulates detected factual inconsistency into fine-grained feedback to guide LLMs in revising solutions. Experimental results demonstrate improvements of RCoT over standard CoT, Self-Consistency and Self-Refine across seven arithmetic datasets. Moreover, we find that manually written fine-grained feedback can dramatically improve LLMs' reasoning abilities (e.g., ChatGPT reaches 94.6% accuracy on GSM8K), encouraging the community to further explore the fine-grained feedback generation methods.

The paper "RCoT: Detecting and Rectifying Factual Inconsistency in Reasoning by Reversing Chain-of-Thought" introduces a novel methodology aimed at improving the reasoning abilities of LLMs, particularly in arithmetic tasks. Despite the potential of LLMs and techniques like Chain-of-Thought (CoT) prompting, factual consistency remains a significant challenge, as models can conditionally overlook, hallucinate, or misinterpret questions and conditions during iterative reasoning.

Key Contributions and Methodology:

  1. RCoT Framework: The authors propose the Reversing Chain-of-Thought (RCoT) method, which improves factual consistency by enabling LLMs to detect and rectify errors in their generated reasoning chains. RCoT reconstructs the original problem from the solution generated by the LLM. Differences between the original and reconstructed problems highlight factual inconsistencies such as hallucinated conditions, overlooked conditions, and question misinterpretations. Fine-grained feedback, derived from these discrepancies, guides LLMs to correct their reasoning processes (a minimal sketch of this loop appears after this list).
  2. Problem Reconstruction: In RCoT, an LLM is first prompted to reconstruct the problem based on the rationale it produced initially. This serves to assess the internal consistency and coherence of the reasoning chain.
  3. Fine-Grained Comparison: The method conducts an in-depth comparison between conditions and conclusions in the original and reconstructed problems, identifying specific instances of factual inconsistency.
  4. Rectification Process: Detected factual inconsistencies are articulated into explicit feedback that guides the LLM to revise its reasoning approach. This process not only improves the solution's accuracy but also enhances interpretability by explicitly identifying reasoning errors.
  5. Experimental Validation: The authors performed comprehensive experiments across seven arithmetic datasets, including GSM8K, AQuA, SVAMP, and others. The RCoT method demonstrated improved performance over standard CoT and other strategies like Self-Consistency and Self-Refine, indicating the method's efficacy in mitigating factual inconsistencies. Notably, RCoT facilitates dramatic improvements when fine-grained, human-crafted feedback is incorporated; for example, ChatGPT achieved 94.6% accuracy on the GSM8K dataset with such feedback.
  6. Comparison to Baselines: RCoT showed superior performance and efficiency compared to methods like Self-Consistency, which involves multiple solution trials, highlighting RCoT's capacity for improving solutions at a reduced computational cost.
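
The detect-and-rectify loop described in items 1-4 can be summarized in pseudocode. The sketch below is illustrative only, not the paper's implementation: `ask_llm` is a hypothetical helper standing in for any chat-completion API call, and the prompt wording is paraphrased rather than taken from the paper.

```python
def ask_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError  # hypothetical helper; wire up to any chat API


def rcot_solve(problem: str) -> str:
    # 1. Standard CoT: produce an initial step-by-step solution.
    solution = ask_llm(f"Solve step by step:\n{problem}")

    # 2. Problem reconstruction: rebuild the problem from the solution alone.
    reconstructed = ask_llm(
        "Write the complete problem that this solution answers, "
        f"using only the information in the solution:\n{solution}"
    )

    # 3. Fine-grained comparison: check conditions and the question in both
    #    problems to surface hallucinated, overlooked, or misread facts.
    feedback = ask_llm(
        "Compare the two problems condition by condition and by the question asked. "
        "List any condition present in one but not the other, and any mismatch in "
        "what is being asked. Reply 'consistent' if they match.\n"
        f"Original:\n{problem}\n\nReconstructed:\n{reconstructed}"
    )

    # 4. Rectification: if inconsistencies were found, revise the solution
    #    using the fine-grained feedback; otherwise keep the original answer.
    if "consistent" not in feedback.lower():
        solution = ask_llm(
            f"Problem:\n{problem}\n\nDraft solution:\n{solution}\n\n"
            f"Detected issues:\n{feedback}\n\nRevise the solution to fix these issues."
        )
    return solution
```

Under this sketch, detection adds two extra LLM calls per problem (reconstruction and comparison) plus at most one revision call, which is the kind of cost the paper contrasts with the repeated sampling required by Self-Consistency.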

Overall, the RCoT approach provides a structured methodology to enhance the factual reliability of reasoning tasks in LLMs, emphasizing the role of fine-grained feedback in error rectification. The findings encourage further exploration into automated fine-grained feedback generation for improving complex reasoning tasks in natural language processing. Future work may extend this method to other forms of reasoning tasks and seek to reduce inference times.

Authors (6)
  1. Tianci Xue (5 papers)
  2. Ziqi Wang (92 papers)
  3. Zhenhailong Wang (17 papers)
  4. Chi Han (30 papers)
  5. Pengfei Yu (20 papers)
  6. Heng Ji (266 papers)
Citations (26)