EXTRACT: Efficient Policy Learning by Extracting Transferable Robot Skills from Offline Data (2406.17768v3)

Published 25 Jun 2024 in cs.RO, cs.AI, and cs.LG

Abstract: Most reinforcement learning (RL) methods focus on learning optimal policies over low-level action spaces. While these methods can perform well in their training environments, they lack the flexibility to transfer to new tasks. Instead, RL agents that can act over useful, temporally extended skills rather than low-level actions can learn new tasks more easily. Prior work in skill-based RL either requires expert supervision to define useful skills, which is hard to scale, or learns a skill-space from offline data with heuristics that limit the adaptability of the skills, making them difficult to transfer during downstream RL. Our approach, EXTRACT, instead utilizes pre-trained vision-language models to extract a discrete set of semantically meaningful skills from offline data, each of which is parameterized by continuous arguments, without human supervision. This skill parameterization allows robots to learn new tasks by only needing to learn when to select a specific skill and how to modify its arguments for the specific task. We demonstrate through experiments in sparse-reward, image-based, robot manipulation environments that EXTRACT can more quickly learn new tasks than prior works, with major gains in sample efficiency and performance over prior skill-based RL. Website at https://www.jessezhang.net/projects/extract/.
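
The abstract's central idea is that the downstream agent acts over a discrete skill choice plus continuous arguments rather than low-level actions. Below is a minimal, hypothetical sketch of what such a skill-parameterized policy head could look like; it is not the authors' implementation, and all names and dimensions (SkillPolicy, obs_dim, num_skills, arg_dim) are assumptions for illustration only.

```python
# Hypothetical sketch of a policy that outputs a discrete skill index plus
# continuous arguments for that skill, as described in the abstract.
# Not the EXTRACT codebase; names and dimensions are illustrative.
import torch
import torch.nn as nn


class SkillPolicy(nn.Module):
    def __init__(self, obs_dim: int, num_skills: int, arg_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        # Logits over the discrete set of extracted skills.
        self.skill_head = nn.Linear(128, num_skills)
        # Continuous arguments conditioned on the observation and chosen skill.
        self.arg_head = nn.Linear(128 + num_skills, arg_dim)

    def forward(self, obs: torch.Tensor):
        h = self.encoder(obs)
        skill_logits = self.skill_head(h)
        skill = torch.distributions.Categorical(logits=skill_logits).sample()
        skill_onehot = nn.functional.one_hot(skill, skill_logits.shape[-1]).float()
        args = self.arg_head(torch.cat([h, skill_onehot], dim=-1))
        return skill, args


# Example: one decision step on a single flattened observation.
policy = SkillPolicy(obs_dim=64, num_skills=8, arg_dim=10)
skill, args = policy(torch.randn(1, 64))
```

In this framing, downstream RL only has to learn which skill to select and how to set its arguments, which is the source of the sample-efficiency gains the abstract claims.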

Authors (7)
  1. Jesse Zhang (22 papers)
  2. Minho Heo (5 papers)
  3. Zuxin Liu (43 papers)
  4. Yao Liu (116 papers)
  5. Rasool Fakoor (26 papers)
  6. Erdem Biyik (9 papers)
  7. Joseph J Lim (4 papers)
Citations (3)
