Compositional Transfer in Hierarchical Reinforcement Learning (1906.11228v3)

Published 26 Jun 2019 in cs.LG, cs.AI, cs.RO, and stat.ML

Abstract: The successful application of general reinforcement learning algorithms to real-world robotics applications is often limited by their high data requirements. We introduce Regularized Hierarchical Policy Optimization (RHPO) to improve data-efficiency for domains with multiple dominant tasks and ultimately reduce required platform time. To this end, we employ compositional inductive biases on multiple levels and corresponding mechanisms for sharing off-policy transition data across low-level controllers and tasks as well as scheduling of tasks. The presented algorithm enables stable and fast learning for complex, real-world domains in the parallel multitask and sequential transfer case. We show that the investigated types of hierarchy enable positive transfer while partially mitigating negative interference and evaluate the benefits of additional incentives for efficient, compositional task solutions in single task domains. Finally, we demonstrate substantial data-efficiency and final performance gains over competitive baselines in a week-long, physical robot stacking experiment.
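The hierarchical structure described in the abstract — a task-conditioned high-level controller that schedules among shared low-level sub-policies — can be illustrated with a minimal sketch. This is not the paper's implementation (RHPO builds on regularized policy optimization with trust-region updates); it is a toy mixture policy with hypothetical linear components and dimensions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not taken from the paper.
N_TASKS, N_COMPONENTS, OBS_DIM, ACT_DIM = 3, 4, 8, 2

# Shared low-level components: each maps an observation to an action mean
# via a linear layer (a stand-in for the sub-policy networks, which are
# reused across all tasks).
W_low = rng.normal(size=(N_COMPONENTS, ACT_DIM, OBS_DIM)) * 0.1

# Task-specific high-level gating: logits over the shared components,
# so each task learns its own mixture weights over common skills.
W_high = rng.normal(size=(N_TASKS, N_COMPONENTS, OBS_DIM)) * 0.1

def act(obs, task_id):
    """Sample an action from the hierarchical mixture policy."""
    logits = W_high[task_id] @ obs                 # (N_COMPONENTS,)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                           # softmax gating
    k = rng.choice(N_COMPONENTS, p=probs)          # high level picks a skill
    mean = W_low[k] @ obs                          # low-level action mean
    return mean + 0.05 * rng.normal(size=ACT_DIM)  # Gaussian action sample

obs = rng.normal(size=OBS_DIM)
action = act(obs, task_id=1)
print(action.shape)  # (2,)
```

Because the low-level components are shared, off-policy transitions collected under one task can be used to update the skills for all tasks — the mechanism the abstract credits for positive transfer while limiting negative interference.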

Authors (10)
  1. Markus Wulfmeier
  2. Abbas Abdolmaleki
  3. Roland Hafner
  4. Jost Tobias Springenberg
  5. Michael Neunert
  6. Tim Hertweck
  7. Thomas Lampe
  8. Noah Siegel
  9. Nicolas Heess
  10. Martin Riedmiller
Citations (27)
