Discrete Factorial Representations as an Abstraction for Goal Conditioned Reinforcement Learning (2211.00247v1)

Published 1 Nov 2022 in cs.LG and cs.AI

Abstract: Goal-conditioned reinforcement learning (RL) is a promising direction for training agents that are capable of solving multiple tasks and reaching a diverse set of objectives. How to *specify* and *ground* these goals in such a way that we can both reliably reach goals during training and generalize to new goals during evaluation remains an open area of research. Defining goals in the space of noisy, high-dimensional sensory inputs poses a challenge for training goal-conditioned agents, and even for generalizing to novel goals. We propose to address this by learning factorial representations of goals and processing the resulting representation via a discretization bottleneck, for coarser goal specification, through an approach we call DGRL. We show that applying a discretizing bottleneck can improve performance in goal-conditioned RL setups by experimentally evaluating this method on tasks ranging from maze environments to complex robotic navigation and manipulation. Additionally, we prove a theorem lower-bounding the expected return on out-of-distribution goals, while still allowing for specifying goals with expressive combinatorial structure.
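
The core mechanism the abstract describes, a factorial goal representation passed through a discretization bottleneck, can be pictured with a short sketch. Below is a minimal, hypothetical PyTorch implementation assuming a VQ-style per-factor codebook with a straight-through gradient estimator; the class and parameter names (`FactorizedDiscreteBottleneck`, `num_factors`, `codebook_size`) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FactorizedDiscreteBottleneck(nn.Module):
    """Sketch of a factorized discretization bottleneck: the continuous
    goal embedding is split into `num_factors` chunks, and each chunk is
    snapped to its nearest entry in a per-factor codebook."""

    def __init__(self, embed_dim: int, num_factors: int, codebook_size: int):
        super().__init__()
        assert embed_dim % num_factors == 0
        self.num_factors = num_factors
        self.factor_dim = embed_dim // num_factors
        # One learnable codebook per factor (shapes are assumptions).
        self.codebooks = nn.Parameter(
            torch.randn(num_factors, codebook_size, self.factor_dim))

    def forward(self, z: torch.Tensor):
        # z: (batch, embed_dim) continuous goal embedding.
        b = z.shape[0]
        z = z.view(b, self.num_factors, 1, self.factor_dim)
        # Squared distance from each chunk to every code in its codebook.
        dists = ((z - self.codebooks.unsqueeze(0)) ** 2).sum(-1)
        idx = dists.argmin(-1)  # (batch, num_factors) discrete goal codes
        codes = torch.stack(
            [self.codebooks[f][idx[:, f]] for f in range(self.num_factors)],
            dim=1)              # (batch, num_factors, factor_dim)
        z = z.squeeze(2)
        # Straight-through estimator: forward pass uses the discrete codes,
        # backward pass routes gradients to the continuous embedding.
        quantized = z + (codes - z).detach()
        return quantized.reshape(b, -1), idx
```

In this reading, the `idx` tensor is the coarse, combinatorial goal specification (one discrete symbol per factor), while the quantized embedding is what the goal-conditioned policy would consume.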

Authors (9)
  1. Riashat Islam (30 papers)
  2. Hongyu Zang (12 papers)
  3. Anirudh Goyal (93 papers)
  4. Alex Lamb (45 papers)
  5. Kenji Kawaguchi (147 papers)
  6. Xin Li (980 papers)
  7. Romain Laroche (36 papers)
  8. Yoshua Bengio (601 papers)
  9. Remi Tachet des Combes (23 papers)
Citations (8)
