Local to Global: Learning Dynamics and Effect of Initialization for Transformers (2406.03072v2)

Published 5 Jun 2024 in cs.LG, cs.IT, math.IT, and stat.ML

Abstract: In recent years, transformer-based models have revolutionized deep learning, particularly in sequence modeling. To better understand this phenomenon, there is a growing interest in using Markov input processes to study transformers. However, our current understanding in this regard remains limited with many fundamental questions about how transformers learn Markov chains still unanswered. In this paper, we address this by focusing on first-order Markov chains and single-layer transformers, providing a comprehensive characterization of the learning dynamics in this context. Specifically, we prove that transformer parameters trained on next-token prediction loss can either converge to global or local minima, contingent on the initialization and the Markovian data properties, and we characterize the precise conditions under which this occurs. To the best of our knowledge, this is the first result of its kind highlighting the role of initialization. We further demonstrate that our theoretical findings are corroborated by empirical evidence. Based on these insights, we provide guidelines for the initialization of transformer parameters and demonstrate their effectiveness. Finally, we outline several open problems in this arena. Code is available at: https://github.com/Bond1995/Markov.
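The abstract's setting of first-order Markov input processes can be illustrated with a small sketch: sampling a binary first-order Markov chain and estimating its transition statistics, which is the structure a transformer trained on next-token prediction would need to capture. The switch probabilities `p` and `q` below are illustrative parameters, not values from the paper.

```python
import random

def sample_markov_chain(p, q, length, seed=0):
    """Sample a binary first-order Markov chain with switch
    probabilities P(1|0) = p and P(0|1) = q.
    (Illustrative parameters, not taken from the paper.)"""
    rng = random.Random(seed)
    x = [rng.randint(0, 1)]
    for _ in range(length - 1):
        prev = x[-1]
        if prev == 0:
            x.append(1 if rng.random() < p else 0)
        else:
            x.append(0 if rng.random() < q else 1)
    return x

def empirical_transition_counts(x):
    """Count 2x2 transitions (a -> b) in a sampled sequence; the
    normalized rows estimate the chain's transition matrix, i.e. the
    optimal next-token predictor for first-order Markov data."""
    counts = [[0, 0], [0, 0]]
    for a, b in zip(x, x[1:]):
        counts[a][b] += 1
    return counts
```

For a long enough sequence, the normalized counts converge to the true transition probabilities, which is the target a well-initialized single-layer transformer should recover under the paper's analysis.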

Authors (7)
  1. Ashok Vardhan Makkuva (15 papers)
  2. Marco Bondaschi (11 papers)
  3. Chanakya Ekbote (9 papers)
  4. Adway Girish (5 papers)
  5. Alliot Nagle (6 papers)
  6. Hyeji Kim (42 papers)
  7. Michael Gastpar (99 papers)
Citations (2)
