From Markov to Laplace: How Mamba In-Context Learns Markov Chains (2502.10178v1)

Published 14 Feb 2025 in cs.LG, cs.AI, cs.IT, and math.IT

Abstract: While transformer-based LLMs have driven the AI revolution thus far, their computational complexity has spurred growing interest in viable alternatives, such as structured state space sequence models (SSMs) and Selective SSMs. Among these, Mamba (S6) and its variant Mamba-2 have shown remarkable inference speedups over transformers while achieving comparable or superior performance on complex language modeling tasks. However, despite these architectural innovations and empirical successes, the fundamental learning capabilities of Mamba remain poorly understood. In this paper, we address this gap by studying in-context learning (ICL) on Markov chains and uncovering a surprising phenomenon: unlike transformers, even a single-layer Mamba efficiently learns the in-context Laplacian smoothing estimator, which is both Bayes and minimax optimal, for all Markovian orders. To explain this, we theoretically characterize the representation capacity of Mamba and reveal the fundamental role of convolution in enabling it to represent the optimal Laplacian smoothing. These theoretical insights align strongly with empirical results and, to the best of our knowledge, represent the first formal connection between Mamba and optimal statistical estimators. Finally, we outline promising research directions inspired by these findings.
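For readers unfamiliar with the estimator the abstract refers to, the following is a minimal sketch of in-context Laplacian (add-beta) smoothing for a k-th order Markov chain. The function name, the binary example, and the choice beta = 1 (classic add-one smoothing) are illustrative assumptions for this sketch, not the paper's implementation or its exact prior.

```python
from collections import defaultdict

def laplacian_smoothing_estimate(seq, order=1, vocab_size=2, beta=1.0):
    """Add-beta (Laplacian) smoothing estimate of the next-symbol
    distribution for a Markov chain of the given order, computed
    purely from the in-context sequence.

    seq        : list of symbols in {0, ..., vocab_size - 1}
    order      : Markov order k (context length)
    vocab_size : alphabet size
    beta       : smoothing parameter (beta = 1 is add-one smoothing;
                 the optimal choice in the paper's setting may differ)
    """
    # Count transitions (context -> next symbol) over the sequence.
    counts = defaultdict(lambda: [0] * vocab_size)
    for i in range(order, len(seq)):
        context = tuple(seq[i - order:i])
        counts[context][seq[i]] += 1

    # Smoothed conditional distribution for the final context in seq.
    context = tuple(seq[-order:])
    c = counts[context]
    total = sum(c) + beta * vocab_size
    return [(c[s] + beta) / total for s in range(vocab_size)]

# Example: binary first-order chain; estimate P(next symbol | last symbol).
seq = [0, 1, 1, 0, 1, 1, 1, 0, 1]
print(laplacian_smoothing_estimate(seq, order=1, vocab_size=2, beta=1.0))
```

The key point the paper makes is that a single-layer Mamba can represent this counting-plus-smoothing computation in context, for any Markov order, with convolution playing the central representational role.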

Authors (8)
  1. Marco Bondaschi (11 papers)
  2. Nived Rajaraman (21 papers)
  3. Xiuying Wei (10 papers)
  4. Kannan Ramchandran (129 papers)
  5. Razvan Pascanu (138 papers)
  6. Caglar Gulcehre (71 papers)
  7. Michael Gastpar (99 papers)
  8. Ashok Vardhan Makkuva (15 papers)