Direct then Diffuse: Incremental Unsupervised Skill Discovery for State Covering and Goal Reaching (2110.14457v2)

Published 27 Oct 2021 in cs.LG

Abstract: Learning meaningful behaviors in the absence of reward is a difficult problem in reinforcement learning. A desirable and challenging unsupervised objective is to learn a set of diverse skills that provide a thorough coverage of the state space while being directed, i.e., reliably reaching distinct regions of the environment. In this paper, we build on the mutual information framework for skill discovery and introduce UPSIDE, which addresses the coverage-directedness trade-off in the following ways: 1) We design policies with a decoupled structure of a directed skill, trained to reach a specific region, followed by a diffusing part that induces a local coverage. 2) We optimize policies by maximizing their number under the constraint that each of them reaches distinct regions of the environment (i.e., they are sufficiently discriminable) and prove that this serves as a lower bound to the original mutual information objective. 3) Finally, we compose the learned directed skills into a growing tree that adaptively covers the environment. We illustrate in several navigation and control environments how the skills learned by UPSIDE solve sparse-reward downstream tasks better than existing baselines.
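For readers unfamiliar with the underlying framework, the sketch below restates the standard variational lower bound on the mutual information objective used in skill discovery, together with a rough paraphrase of the constrained formulation described in point 2 of the abstract. The discriminator $q_\phi$, the skill prior $p(z)$, and the threshold $\eta$ are illustrative notation, not symbols taken from the paper.

\[
I(S; Z) \;=\; H(Z) - H(Z \mid S) \;\ge\; \mathbb{E}_{z \sim p(z),\, s \sim \pi_z}\!\big[ \log q_\phi(z \mid s) - \log p(z) \big]
\]

\[
\max_{N,\ \pi_1, \dots, \pi_N} \; N \quad \text{s.t.} \quad \mathbb{E}_{s \sim \pi_z}\!\big[ q_\phi(z \mid s) \big] \;\ge\; \eta \quad \text{for all } z \in \{1, \dots, N\}
\]

The first line is the usual Barber–Agakov-style lower bound on the mutual information between visited states and the latent skill; the second expresses, in constrained form, the idea of maximizing the number of skills subject to each being sufficiently discriminable from the states it reaches, which the abstract states is itself a lower bound on the original objective.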

Authors (5)
  1. Pierre-Alexandre Kamienny (11 papers)
  2. Jean Tarbouriech (10 papers)
  3. Sylvain Lamprier (40 papers)
  4. Alessandro Lazaric (78 papers)
  5. Ludovic Denoyer (51 papers)
Citations (18)

Summary

We haven't generated a summary for this paper yet.