Towards Faithful Chain-of-Thought: Large Language Models are Bridging Reasoners (2405.18915v1)

Published 29 May 2024 in cs.CL and cs.AI

Abstract: LLMs suffer from serious unfaithful chain-of-thought (CoT) issues. Previous work attempts to measure and explain these issues but lacks in-depth analysis within CoTs and does not consider the interactions among all reasoning components jointly. In this paper, we first study the CoT faithfulness issue at the granularity of individual CoT steps, identify two reasoning paradigms, centralized reasoning and distributed reasoning, and characterize their relationship to faithfulness. We then conduct a joint analysis of the causal relevance among the context, the CoT, and the answer during reasoning. The results show that, when predicting answers, the LLM can recall correct information from the context that is missing from the CoT, which leads to unfaithfulness. Finally, we propose the inferential bridging method to mitigate this issue: we use an attribution method to recall information as hints for CoT generation and filter out noisy CoTs based on their semantic consistency and attribution scores. Extensive experiments demonstrate that our approach effectively alleviates the unfaithful CoT problem.
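
The abstract describes the inferential bridging pipeline only at a high level, so the sketch below is one plausible reading of it, not the authors' implementation. Everything in it is an assumption: the sentence-length hint heuristic, the `generate_cots` stub, and the filtering thresholds merely stand in for the paper's actual attribution method, LLM sampling, and scoring.

```python
# Hypothetical sketch of "inferential bridging": recall hints from the context
# via attribution, generate CoT candidates with those hints, then filter out
# noisy CoTs by semantic consistency and attribution score.
# All names, heuristics, and thresholds are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class CoTCandidate:
    text: str
    consistency: float   # semantic consistency with the context (assumed metric)
    attribution: float   # attribution of the answer onto the CoT (assumed metric)


def attribution_hints(context: str, top_k: int = 3) -> list[str]:
    """Recall the context spans with the highest attribution scores (stub).

    A real system would compute token-level attributions from the answer back
    to the context; here the longest sentences serve as a placeholder heuristic.
    """
    sentences = [s.strip() for s in context.split(".") if s.strip()]
    return sorted(sentences, key=len, reverse=True)[:top_k]


def generate_cots(question: str, hints: list[str], n: int = 4) -> list[CoTCandidate]:
    """Stand-in for sampling n CoTs from an LLM prompted with the recalled hints."""
    prompt = f"{question}\nHints: {'; '.join(hints)}\nLet's think step by step."
    # Placeholder: fabricate candidates with dummy scores instead of an API call.
    return [
        CoTCandidate(f"[CoT {i} for: {prompt[:40]}...]", 0.5 + 0.1 * i, 0.4 + 0.1 * i)
        for i in range(n)
    ]


def bridge(question: str, context: str,
           min_consistency: float = 0.7, min_attribution: float = 0.6) -> str | None:
    """Recall hints, sample CoTs, and keep the best non-noisy candidate."""
    hints = attribution_hints(context)
    candidates = generate_cots(question, hints)
    kept = [c for c in candidates
            if c.consistency >= min_consistency and c.attribution >= min_attribution]
    # Prefer the CoT the answer is most strongly attributed to.
    return max(kept, key=lambda c: c.attribution).text if kept else None


if __name__ == "__main__":
    print(bridge("Who wrote the report?",
                 "The 2023 report was written by the audit team. It ran 40 pages."))
```

The design point mirrored from the abstract is the two-stage use of attribution: once before generation, to pull missing context into the CoT as hints, and once after, to discard CoTs the answer does not actually rely on.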

Authors (5)
  1. Jiachun Li (17 papers)
  2. Pengfei Cao (39 papers)
  3. Yubo Chen (58 papers)
  4. Kang Liu (207 papers)
  5. Jun Zhao (469 papers)
Citations (4)