MaIL: Improving Imitation Learning with Mamba (2406.08234v2)

Published 12 Jun 2024 in cs.LG and cs.RO

Abstract: This work presents Mamba Imitation Learning (MaIL), a novel imitation learning (IL) architecture that provides an alternative to state-of-the-art (SoTA) Transformer-based policies. MaIL leverages Mamba, a state-space model designed to selectively focus on key features of the data. While Transformers are highly effective in data-rich environments due to their dense attention mechanisms, they can struggle with smaller datasets, often leading to overfitting or suboptimal representation learning. In contrast, Mamba's architecture enhances representation learning efficiency by focusing on key features and reducing model complexity. This approach mitigates overfitting and enhances generalization, even when working with limited data. Extensive evaluations on the LIBERO benchmark demonstrate that MaIL consistently outperforms Transformers on all LIBERO tasks with limited data and matches their performance when the full dataset is available. Additionally, MaIL's effectiveness is validated through its superior performance in three real robot experiments. Our code is available at https://github.com/ALRhub/MaIL.
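To make the abstract's core idea more concrete, below is a minimal, illustrative sketch of a Mamba-style selective state-space (S6) block in PyTorch, the kind of layer MaIL uses in place of Transformer attention. This is not the authors' implementation (see the linked repository); the class name, dimension choices, and the slow sequential scan loop are simplifying assumptions made here for readability, whereas Mamba proper relies on a hardware-aware parallel scan.

    # Illustrative sketch of a Mamba-style selective state-space (S6) block.
    # NOT the MaIL code; names and the sequential scan are simplifying assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SelectiveSSMBlock(nn.Module):
        def __init__(self, d_model: int, d_state: int = 16, expand: int = 2):
            super().__init__()
            self.d_state = d_state
            self.d_inner = expand * d_model
            self.in_proj = nn.Linear(d_model, 2 * self.d_inner)        # value and gate branches
            self.conv1d = nn.Conv1d(self.d_inner, self.d_inner, kernel_size=4,
                                    padding=3, groups=self.d_inner)     # causal depthwise conv
            # Input-dependent ("selective") SSM parameters: delta, B, C
            self.x_proj = nn.Linear(self.d_inner, 1 + 2 * d_state)
            # Learned state matrix A, log-parameterised so -exp(A_log) stays negative
            self.A_log = nn.Parameter(
                torch.log(torch.arange(1, d_state + 1).float()).repeat(self.d_inner, 1))
            self.out_proj = nn.Linear(self.d_inner, d_model)

        def forward(self, u: torch.Tensor) -> torch.Tensor:             # u: (batch, seq, d_model)
            x, gate = self.in_proj(u).chunk(2, dim=-1)
            # Causal convolution: pad, convolve, then crop back to the original length
            x = self.conv1d(x.transpose(1, 2))[..., : u.size(1)].transpose(1, 2)
            x = F.silu(x)

            dt, B, C = self.x_proj(x).split([1, self.d_state, self.d_state], dim=-1)
            dt = F.softplus(dt)                                          # (batch, seq, 1)
            A = -torch.exp(self.A_log)                                   # (d_inner, d_state)

            # Selective scan: h_t = exp(dt_t * A) h_{t-1} + dt_t * B_t * x_t,  y_t = C_t . h_t
            h = x.new_zeros(x.size(0), self.d_inner, self.d_state)
            ys = []
            for t in range(x.size(1)):
                dA = torch.exp(dt[:, t].unsqueeze(-1) * A)
                dBx = dt[:, t].unsqueeze(-1) * B[:, t].unsqueeze(1) * x[:, t].unsqueeze(-1)
                h = dA * h + dBx
                ys.append((h * C[:, t].unsqueeze(1)).sum(-1))
            y = torch.stack(ys, dim=1) * F.silu(gate)                    # gated output
            return self.out_proj(y)

In a policy along the lines described in the abstract, several such blocks would be stacked over the sequence of observation and action tokens where a Transformer would use self-attention layers; the input-dependent delta, B, and C are what allow the model to selectively retain or discard past context, which is the property the abstract credits for better behavior in the low-data regime.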

Authors (10)
  1. Xiaogang Jia (16 papers)
  2. Qian Wang (453 papers)
  3. Atalay Donat (4 papers)
  4. Bowen Xing (14 papers)
  5. Ge Li (213 papers)
  6. Hongyi Zhou (53 papers)
  7. Denis Blessing (14 papers)
  8. Rudolf Lioutikov (30 papers)
  9. Gerhard Neumann (99 papers)
  10. Onur Celik (13 papers)
Citations (9)