
Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning (2204.13060v3)

Published 27 Apr 2022 in cs.LG

Abstract: Building generalizable goal-conditioned agents from rich observations is key to applying reinforcement learning (RL) to real-world problems. Traditionally in goal-conditioned RL, an agent is provided with the exact goal it intends to reach. However, it is often unrealistic to know the configuration of the goal before performing a task. A more scalable framework would allow us to provide the agent with an example of an analogous task and have it infer what the goal should be for its current state. We propose a new form of state abstraction, called goal-conditioned bisimulation, that captures functional equivariance, allowing skills to be reused to achieve new goals. We learn this representation using a metric form of the abstraction and show its ability to generalize to new goals in simulated manipulation tasks. Further, we prove that this learned representation is sufficient not only for goal-conditioned tasks but for any downstream task described by a state-only reward function. Videos can be found at https://sites.google.com/view/gc-bisimulation.
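
The abstract's key ingredient is a bisimulation-style metric learned over goal-conditioned state representations. As a rough illustration of that idea only (not the authors' released implementation), the sketch below adapts the deep-bisimulation-for-control objective of Zhang et al. (2021) to goal-conditioned encodings: latent L1 distances between random batch pairs are regressed toward a reward gap plus a discounted 2-Wasserstein distance between Gaussian latent-dynamics predictions. The names `encoder` and `dynamics`, and all shapes, are hypothetical assumptions.

```python
# Hypothetical sketch of a goal-conditioned bisimulation metric loss.
# Loosely follows deep bisimulation for control (Zhang et al., 2021),
# applied to goal-conditioned encodings; all names are illustrative.
import torch
import torch.nn.functional as F

def gc_bisim_loss(encoder, dynamics, obs, goal, reward, gamma=0.99):
    """Regress latent L1 distances onto bisimulation targets.

    encoder(obs, goal) -> z of shape (B, D)   # goal-conditioned latent
    dynamics(z)        -> (mu, sigma)         # Gaussian next-latent params
    reward             -> shape (B,)          # goal-conditioned reward
    """
    z = encoder(obs, goal)
    perm = torch.randperm(z.size(0))          # random pairing within the batch

    # L1 distance between paired latents: the learned "metric".
    z_dist = torch.abs(z - z[perm]).sum(dim=-1)

    # Bisimulation target: reward gap plus discounted 2-Wasserstein
    # distance between the Gaussian latent-dynamics predictions.
    with torch.no_grad():
        mu, sigma = dynamics(z)
        r_dist = torch.abs(reward - reward[perm])
        w2 = torch.sqrt(
            ((mu - mu[perm]) ** 2).sum(dim=-1)
            + ((sigma - sigma[perm]) ** 2).sum(dim=-1)
        )
        target = r_dist + gamma * w2

    return F.mse_loss(z_dist, target)
```

Under this assumed setup, states paired with different goals but analogous task structure are pushed close together in latent space, which is what lets the agent reuse skills for new goals.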

Authors (5)
  1. Philippe Hansen-Estruch (10 papers)
  2. Amy Zhang (99 papers)
  3. Ashvin Nair (20 papers)
  4. Patrick Yin (8 papers)
  5. Sergey Levine (531 papers)
Citations (23)
