Equivariant Offline Reinforcement Learning (2406.13961v1)

Published 20 Jun 2024 in cs.LG and cs.RO

Abstract: Sample efficiency is critical when applying learning-based methods to robotic manipulation due to the high cost of collecting expert demonstrations and the challenges of on-robot policy learning through online Reinforcement Learning (RL). Offline RL addresses this issue by enabling policy learning from an offline dataset collected using any behavioral policy, regardless of its quality. However, recent advancements in offline RL have predominantly focused on learning from large datasets. Given that many robotic manipulation tasks can be formulated as rotation-symmetric problems, we investigate the use of $SO(2)$-equivariant neural networks for offline RL with a limited number of demonstrations. Our experimental results show that equivariant versions of Conservative Q-Learning (CQL) and Implicit Q-Learning (IQL) outperform their non-equivariant counterparts. We provide empirical evidence demonstrating how equivariance improves offline learning algorithms in the low-data regime.
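
The abstract hinges on one structural idea: if a manipulation task is symmetric under planar rotations, the optimal Q-function is invariant, satisfying Q(g·s, g·a) = Q(s, a) for every g in SO(2), and building that constraint into the network is what distinguishes the equivariant CQL/IQL variants from their vanilla counterparts. The sketch below illustrates the constraint by group-averaging an ordinary MLP over the cyclic subgroup C_N. This is a hypothetical PyTorch illustration of the symmetry property only, not the authors' architecture (the paper uses properly equivariant networks), and all names in it are invented.

```python
# Minimal sketch (assumption: states and actions are planar 2D vectors).
# Invariance is enforced by averaging a plain MLP over the N rotations
# of the cyclic group C_N, an approximation of SO(2).
import torch
import torch.nn as nn

def rotation_matrix(theta: torch.Tensor) -> torch.Tensor:
    """2x2 planar rotation matrix for a scalar angle tensor theta."""
    c, s = torch.cos(theta), torch.sin(theta)
    return torch.stack([torch.stack([c, -s]), torch.stack([s, c])])

class CNInvariantQ(nn.Module):
    """Q(s, a) averaged over the N rotations of C_N, so Q(g.s, g.a) = Q(s, a)."""
    def __init__(self, n_rotations: int = 8, hidden: int = 64):
        super().__init__()
        self.n = n_rotations
        # Plain MLP on concatenated (state, action); the symmetry comes
        # only from the group average in forward(), not from the layers.
        self.q = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # state, action: (batch, 2) planar vectors, e.g. an object offset
        # and a push direction in the workspace plane.
        outs = []
        for k in range(self.n):
            theta = torch.tensor(2.0 * torch.pi * k / self.n)
            R = rotation_matrix(theta)
            # Rotate both inputs by the same group element, then evaluate.
            outs.append(self.q(torch.cat([state @ R.T, action @ R.T], dim=-1)))
        # Averaging over the whole group makes the result invariant:
        # substituting (g.s, g.a) merely permutes the terms of the sum.
        return torch.stack(outs, dim=0).mean(dim=0)

if __name__ == "__main__":
    q = CNInvariantQ(n_rotations=8)
    s, a = torch.randn(5, 2), torch.randn(5, 2)
    g = rotation_matrix(torch.tensor(2.0 * torch.pi / 8))  # an element of C_8
    print(torch.allclose(q(s, a), q(s @ g.T, a @ g.T), atol=1e-5))  # True
```

Rotating state and action together by any multiple of 2π/N leaves the output unchanged, which is exactly the property that lets a Q-function trained on demonstrations collected in one orientation generalize to all others — the mechanism behind the paper's gains in the low-data regime.
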

Authors (6)
  1. Arsh Tangri (4 papers)
  2. Ondrej Biza (19 papers)
  3. Dian Wang (34 papers)
  4. David Klee (9 papers)
  5. Owen Howell (12 papers)
  6. Robert Platt (70 papers)
Citations (2)
