Unlocking Pixels for Reinforcement Learning via Implicit Attention (2102.04353v5)

Published 8 Feb 2021 in cs.LG, cs.AI, cs.CV, and cs.RO

Abstract: There has recently been significant interest in training reinforcement learning (RL) agents in vision-based environments. This poses many challenges, such as high dimensionality and the potential for observational overfitting through spurious correlations. A promising approach to both problems is an attention bottleneck, which provides a simple and effective framework for learning high-performing policies even in the presence of distractions. However, due to the poor scalability of attention architectures, these methods cannot be applied beyond low-resolution visual inputs and must use large patches (and thus small attention matrices). In this paper we make use of new efficient attention algorithms, recently shown to be highly effective for Transformers, and demonstrate that these techniques can be successfully adopted in the RL setting. This allows our attention-based controllers to scale to larger visual inputs and facilitates the use of smaller patches, even individual pixels, improving generalization. We show this on a range of tasks, from the Distracting Control Suite to vision-based quadruped robot locomotion. We also provide a rigorous theoretical analysis of the proposed algorithm.
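The core technical move described in the abstract is replacing quadratic self-attention over patch tokens with a linear-time kernel approximation (e.g., the FAVOR+ mechanism from Performers, due to several of the same authors), so that per-patch attention scores for the bottleneck can be computed even at pixel-level granularity. The sketch below, in plain numpy, illustrates this idea: it scores image patches by the column sums of an approximate attention matrix without ever materializing the full n-by-n matrix. The patch size, feature dimension, projection matrices, and top-k selection are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch of a linear-attention patch bottleneck (assumptions noted inline).
import numpy as np

def extract_patches(image, patch):
    """Split an HxWxC image into flattened non-overlapping patch tokens."""
    H, W, C = image.shape
    rows, cols = H // patch, W // patch
    return (image[:rows * patch, :cols * patch]
            .reshape(rows, patch, cols, patch, C)
            .transpose(0, 2, 1, 3, 4)
            .reshape(rows * cols, patch * patch * C))

def softmax_kernel_features(x, projection):
    """FAVOR+-style positive random features approximating exp(<q,k>/sqrt(d))."""
    d = x.shape[-1]
    x = x / d ** 0.25                                  # absorb the 1/sqrt(d) softmax scaling
    proj = x @ projection.T                            # (n, m)
    norm = np.sum(x ** 2, axis=-1, keepdims=True) / 2  # exp(w^T x - ||x||^2/2) is unbiased
    return np.exp(proj - norm) / np.sqrt(projection.shape[0])

def patch_importance(q, k, num_features=128, seed=0):
    """Column sums of the approximate softmax attention matrix in O(n*m) time."""
    rng = np.random.default_rng(seed)
    projection = rng.standard_normal((num_features, q.shape[-1]))
    q_f = softmax_kernel_features(q, projection)       # (n, m)
    k_f = softmax_kernel_features(k, projection)       # (n, m)
    row_norm = q_f @ k_f.sum(axis=0)                   # (n,) softmax denominators
    s = (q_f / row_norm[:, None]).sum(axis=0)          # (m,)
    return k_f @ s                                     # (n,) total attention each patch receives

# Toy usage: score 4x4 patches of a 64x64 RGB frame and keep the top 10
# as the bottleneck the controller would see. W_q and W_k stand in for
# learned projections and are hypothetical here.
rng = np.random.default_rng(1)
frame = rng.random((64, 64, 3))
patches = extract_patches(frame, patch=4)              # (256, 48) tokens
W_q = rng.standard_normal((48, 32)) * 0.1
W_k = rng.standard_normal((48, 32)) * 0.1
importance = patch_importance(patches @ W_q, patches @ W_k)
top_patches = np.argsort(importance)[-10:]             # indices of most-attended patches
```

Because the random features factor the attention matrix, the cost grows linearly in the number of patches, which is what lets the patch size shrink toward individual pixels without the quadratic blowup of exact attention.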

Authors (12)
  1. Krzysztof Marcin Choromanski (3 papers)
  2. Deepali Jain (26 papers)
  3. Wenhao Yu (139 papers)
  4. Xingyou Song (32 papers)
  5. Jack Parker-Holder (47 papers)
  6. Tingnan Zhang (53 papers)
  7. Valerii Likhosherstov (25 papers)
  8. Aldo Pacchiano (72 papers)
  9. Anirban Santara (13 papers)
  10. Yunhao Tang (63 papers)
  11. Jie Tan (85 papers)
  12. Adrian Weller (150 papers)
Citations (3)
