
URLB: Unsupervised Reinforcement Learning Benchmark (2110.15191v1)

Published 28 Oct 2021 in cs.LG, cs.AI, and cs.RO

Abstract: Deep Reinforcement Learning (RL) has emerged as a powerful paradigm to solve a range of complex yet specific control tasks. Yet training generalist agents that can quickly adapt to new tasks remains an outstanding challenge. Recent advances in unsupervised RL have shown that pre-training RL agents with self-supervised intrinsic rewards can result in efficient adaptation. However, these algorithms have been hard to compare and develop due to the lack of a unified benchmark. To this end, we introduce the Unsupervised Reinforcement Learning Benchmark (URLB). URLB consists of two phases: reward-free pre-training and downstream task adaptation with extrinsic rewards. Building on the DeepMind Control Suite, we provide twelve continuous control tasks from three domains for evaluation and open-source code for eight leading unsupervised RL methods. We find that the implemented baselines make progress but are not able to solve URLB and propose directions for future research.
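The abstract's two-phase protocol (reward-free pre-training on an intrinsic reward, then downstream adaptation with the task's extrinsic reward) can be sketched as follows. This is a minimal toy illustration of the evaluation structure only; every class and function name here is a hypothetical placeholder, not URLB's actual API, and the "agent" is a scalar stand-in rather than a real RL learner.

```python
# Toy sketch of the URLB two-phase protocol: (1) pre-train with a
# self-supervised intrinsic reward, no task reward available;
# (2) adapt to a downstream task using the extrinsic reward.
# All names are illustrative placeholders, not URLB's interface.
import random

class Agent:
    """Minimal stand-in for an RL agent with a scalar 'skill' state."""
    def __init__(self):
        self.skill = 0.0

    def update(self, reward):
        # Toy update rule: accumulate a fraction of the observed reward.
        self.skill += 0.1 * reward

def intrinsic_reward(step):
    # Placeholder for a self-supervised signal (e.g. curiosity or
    # state-entropy bonuses used by unsupervised RL methods).
    return random.random()

def extrinsic_reward(agent, step):
    # Placeholder task reward that benefits from pre-trained skill.
    return agent.skill + random.random()

def pretrain(agent, steps):
    # Phase 1: reward-free pre-training on intrinsic reward only.
    for t in range(steps):
        agent.update(intrinsic_reward(t))

def adapt(agent, steps):
    # Phase 2: fine-tune on a downstream task with extrinsic reward;
    # return the total extrinsic reward collected during adaptation.
    total = 0.0
    for t in range(steps):
        r = extrinsic_reward(agent, t)
        agent.update(r)
        total += r
    return total

if __name__ == "__main__":
    random.seed(0)
    pretrained = Agent()
    pretrain(pretrained, steps=100)
    scratch = Agent()  # baseline trained from scratch, no pre-training
    score_pretrained = adapt(pretrained, steps=50)
    score_scratch = adapt(scratch, steps=50)
    # In this toy setup the pre-trained agent adapts more effectively.
    print(score_pretrained > score_scratch)
```

The point of the sketch is the protocol shape, not the learning rule: in URLB, methods are compared by how much the reward-free phase accelerates downstream adaptation relative to training from scratch.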

Authors (9)
  1. Michael Laskin (20 papers)
  2. Denis Yarats (20 papers)
  3. Hao Liu (497 papers)
  4. Kimin Lee (69 papers)
  5. Albert Zhan (5 papers)
  6. Kevin Lu (23 papers)
  7. Catherine Cang (2 papers)
  8. Lerrel Pinto (81 papers)
  9. Pieter Abbeel (372 papers)
Citations (122)
