Towards a Mechanistic Interpretation of Multi-Step Reasoning Capabilities of Language Models (2310.14491v1)

Published 23 Oct 2023 in cs.CL

Abstract: Recent work has shown that language models (LMs) have strong multi-step (i.e., procedural) reasoning capabilities. However, it is unclear whether LMs perform these tasks by cheating with answers memorized from the pretraining corpus, or via a genuine multi-step reasoning mechanism. In this paper, we try to answer this question by exploring a mechanistic interpretation of LMs for multi-step reasoning tasks. Concretely, we hypothesize that the LM implicitly embeds a reasoning tree resembling the correct reasoning process within it. We test this hypothesis by introducing a new probing approach (called MechanisticProbe) that recovers the reasoning tree from the model's attention patterns. We use our probe to analyze two LMs: GPT-2 on a synthetic task (finding the k-th smallest element of a list), and LLaMA on two simple language-based reasoning tasks (ProofWriter and the AI2 Reasoning Challenge). We show that MechanisticProbe is able to detect information about the reasoning tree from the model's attentions for most examples, suggesting that in many cases the LM indeed goes through a process of multi-step reasoning within its architecture.
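
To make the probing idea concrete, here is a minimal sketch of attention-pattern probing in the spirit of the abstract. This is not the authors' MechanisticProbe implementation: the example sentence, the gold reasoning-tree edges (`reasoning_edges`), and the token-pair feature construction are all illustrative assumptions. It extracts per-layer, per-head attention matrices from GPT-2 via Hugging Face `transformers` and fits a simple logistic-regression probe to predict which ordered token pairs are edges of a (hypothetical) reasoning tree.

```python
# Minimal sketch of probing attention patterns for reasoning-tree
# structure (illustrative only; not the paper's MechanisticProbe).
# Assumes the `transformers`, `torch`, `numpy`, and `scikit-learn`
# packages are installed.
import numpy as np
import torch
from transformers import GPT2Tokenizer, GPT2Model
from sklearn.linear_model import LogisticRegression

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_attentions=True)
model.eval()

def attention_features(text):
    """Return attention features for every ordered token pair.

    Output shape: (seq_len, seq_len, n_layers * n_heads). Each pair
    (i, j) is described by how much token i attends to token j in
    every head of every layer.
    """
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # out.attentions is a tuple with one (1, n_heads, seq, seq)
    # tensor per layer; stack them into (n_layers, n_heads, seq, seq).
    att = torch.stack(out.attentions, dim=0).squeeze(1)
    # Rearrange to (seq, seq, n_layers, n_heads), then flatten the
    # layer/head dimensions into one feature vector per token pair.
    return att.permute(2, 3, 0, 1).flatten(2).numpy()

# Hypothetical supervision: token-index pairs assumed to be edges of
# the gold reasoning tree for this one example (made-up labels, not
# real annotation data).
text = "If A then B. If B then C. A is true. Therefore C."
reasoning_edges = {(3, 8), (8, 13)}

feats = attention_features(text)
seq_len = feats.shape[0]
X = feats.reshape(-1, feats.shape[-1])
y = np.array(
    [(i, j) in reasoning_edges for i in range(seq_len) for j in range(seq_len)],
    dtype=int,
)

# A deliberately simple linear probe: if the attention patterns encode
# the reasoning tree, even this classifier should separate edge pairs
# from non-edge pairs.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", probe.score(X, y))
```

In a realistic setup one would fit the probe on many annotated examples and evaluate on held-out ones; the point of the sketch is only the pipeline shape: attention matrices in, per-pair features out, a lightweight classifier on top.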

Authors (8)
  1. Yifan Hou (27 papers)
  2. Jiaoda Li (8 papers)
  3. Yu Fei (4 papers)
  4. Alessandro Stolfo (12 papers)
  5. Wangchunshu Zhou (73 papers)
  6. Guangtao Zeng (14 papers)
  7. Antoine Bosselut (85 papers)
  8. Mrinmaya Sachan (124 papers)
Citations (26)