A Mechanistic Analysis of a Transformer Trained on a Symbolic Multi-Step Reasoning Task (2402.11917v3)

Published 19 Feb 2024 in cs.LG

Abstract: Transformers demonstrate impressive performance on a range of reasoning benchmarks. To evaluate the degree to which these abilities are a result of actual reasoning, existing work has focused on developing sophisticated benchmarks for behavioral studies. However, these studies do not provide insights into the internal mechanisms driving the observed capabilities. To improve our understanding of the internal mechanisms of transformers, we present a comprehensive mechanistic analysis of a transformer trained on a synthetic reasoning task. We identify a set of interpretable mechanisms the model uses to solve the task, and validate our findings using correlational and causal evidence. Our results suggest that the model implements a depth-bounded recurrent mechanism that operates in parallel and stores intermediate results in selected token positions. We anticipate that the motifs we identified in our synthetic setting can provide valuable insights into the broader operating principles of transformers and thus provide a basis for understanding more complex models.
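For intuition only, the sketch below generates one flavor of symbolic multi-step reasoning data: a shuffled chain of single-step facts followed by a query that can only be answered by composing several steps. The task format, symbol vocabulary, and the helper `make_chain_example` are illustrative assumptions and do not reproduce the paper's actual task specification.

```python
import random

def make_chain_example(num_steps=4, seed=None):
    """Build one toy multi-step reasoning example as a text prompt.

    Facts form a chain A>B, B>C, ... and the query asks which symbol is
    reached from the chain's start, so answering requires composing all steps.
    """
    rng = random.Random(seed)
    symbols = rng.sample([chr(ord("A") + i) for i in range(26)], num_steps + 1)
    facts = [f"{a}>{b}" for a, b in zip(symbols, symbols[1:])]
    rng.shuffle(facts)  # shuffle so the answer cannot be read off positionally
    prompt = " ".join(facts) + f" | {symbols[0]}?"
    answer = symbols[-1]
    return prompt, answer

if __name__ == "__main__":
    prompt, answer = make_chain_example(num_steps=4, seed=0)
    print(prompt, "->", answer)
```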

Authors (5)
  1. Jannik Brinkmann (9 papers)
  2. Abhay Sheshadri (5 papers)
  3. Victor Levoso (1 paper)
  4. Paul Swoboda (35 papers)
  5. Christian Bartelt (29 papers)
Citations (15)
