Divide-and-Conquer Reinforcement Learning (1711.09874v2)

Published 27 Nov 2017 in cs.LG and cs.RO

Abstract: Standard model-free deep reinforcement learning (RL) algorithms sample a new initial state for each trial, allowing them to optimize policies that can perform well even in highly stochastic environments. However, problems that exhibit considerable initial state variation typically produce high-variance gradient estimates for model-free RL, making direct policy or value function optimization challenging. In this paper, we develop a novel algorithm that instead partitions the initial state space into "slices", and optimizes an ensemble of policies, each on a different slice. The ensemble is gradually unified into a single policy that can succeed on the whole state space. This approach, which we term divide-and-conquer RL, is able to solve complex tasks where conventional deep RL methods are ineffective. Our results show that divide-and-conquer RL greatly outperforms conventional policy gradient methods on challenging grasping, manipulation, and locomotion tasks, and exceeds the performance of a variety of prior methods. Videos of policies learned by our algorithm can be viewed at http://bit.ly/dnc-rl
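
The abstract describes a three-stage structure: partition the initial state space into "slices", train one policy per slice, then unify the ensemble into a single policy covering the whole space. Purely as an illustration of that structure, here is a minimal self-contained sketch on a toy 2-D reaching task. It is not the paper's method: the k-means partitioning, the finite-difference policy updates, and the least-squares distillation are all simplified stand-ins, and every helper name below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (hypothetical): a 2-D point mass starts at a widely varying
# initial position and is rewarded for staying near the origin.
def rollout_return(policy_W, s0, horizon=20, noise=0.1):
    s, total = s0.copy(), 0.0
    for _ in range(horizon):
        a = policy_W @ s + noise * rng.standard_normal(2)  # stochastic linear policy
        s = s + 0.1 * a
        total += -np.linalg.norm(s)
    return total

def kmeans_slices(states, k, iters=10):
    # Assumption: slices come from k-means over sampled initial states.
    # The paper partitions the initial state space; the exact clustering
    # procedure used here is illustrative.
    centers = states[rng.choice(len(states), k, replace=False)]
    for _ in range(iters):
        labels = ((states[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([states[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

# Sample initial states with high variation and partition them into slices.
init_states = rng.uniform(-3, 3, size=(200, 2))
K = 4
labels = kmeans_slices(init_states, K)

# One policy per slice, trained with a crude finite-difference
# policy-gradient step (a stand-in for the per-slice RL updates).
policies = [np.zeros((2, 2)) for _ in range(K)]
for it in range(50):
    for j in range(K):
        slice_states = init_states[labels == j]
        if not len(slice_states):
            continue
        grad = np.zeros((2, 2))
        for _ in range(8):  # finite-difference gradient estimate
            d = rng.standard_normal((2, 2))
            s0 = slice_states[rng.integers(len(slice_states))]
            rp = rollout_return(policies[j] + 0.05 * d, s0)
            rm = rollout_return(policies[j] - 0.05 * d, s0)
            grad += (rp - rm) / (2 * 0.05) * d / 8
        policies[j] += 0.01 * grad

# Unify the ensemble: fit one central policy by regressing its actions
# onto each local policy's actions on that policy's own slice
# (a least-squares stand-in for the gradual unification step).
X = np.concatenate([init_states[labels == j] for j in range(K)])
Y = np.concatenate([init_states[labels == j] @ policies[j].T for j in range(K)])
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
central_W = B.T

print("central policy mean return:",
      np.mean([rollout_return(central_W, s) for s in init_states[:50]]))
```

In the paper itself the per-slice policies are trained with policy gradient methods and the ensemble is unified gradually during training; the one-shot regression above only mimics that final unification at toy scale.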

Authors (5)
  1. Dibya Ghosh (20 papers)
  2. Avi Singh (21 papers)
  3. Aravind Rajeswaran (42 papers)
  4. Vikash Kumar (70 papers)
  5. Sergey Levine (531 papers)
Citations (119)
