Theoretically Guaranteed Policy Improvement Distilled from Model-Based Planning (2307.12933v1)

Published 24 Jul 2023 in cs.AI

Abstract: Model-based reinforcement learning (RL) has demonstrated remarkable successes on a range of continuous control tasks due to its high sample efficiency. To save the computation cost of conducting planning online, recent practices tend to distill optimized action sequences into an RL policy during the training phase. Although the distillation can incorporate both the foresight of planning and the exploration ability of RL policies, the theoretical understanding of these methods remains unclear. In this paper, we extend the policy improvement step of Soft Actor-Critic (SAC) by developing an approach that distills model-based planning into the policy. We then demonstrate that this policy improvement approach has a theoretical guarantee of monotonic improvement and convergence to the maximum value defined in SAC. We discuss effective design choices and implement our theory as a practical algorithm -- Model-based Planning Distilled to Policy (MPDP) -- that updates the policy jointly over multiple future time steps. Extensive experiments show that MPDP achieves better sample efficiency and asymptotic performance than both model-free and model-based planning algorithms on six continuous control benchmark tasks in MuJoCo.
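
To make the distillation idea concrete, here is a minimal sketch (not taken from the paper) of what a multi-step, planning-distilled SAC-style policy update might look like: the actor is rolled through a learned dynamics model for a short horizon and updated jointly against the soft value at every imagined step. All module names and architectures (GaussianPolicy, dynamics, q_value), the horizon, and the exact loss form are illustrative assumptions, not MPDP's actual implementation.

```python
# Hedged sketch: jointly updating a SAC-style actor over several imagined
# future steps using a learned dynamics model. Sizes and modules are toy
# assumptions for illustration only.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, HORIZON = 17, 6, 5  # toy sizes (assumed)

class GaussianPolicy(nn.Module):
    """SAC-style stochastic actor with a tanh-squashed Gaussian."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * ACTION_DIM))

    def forward(self, s):
        mu, log_std = self.net(s).chunk(2, dim=-1)
        dist = torch.distributions.Normal(mu, log_std.clamp(-5, 2).exp())
        x = dist.rsample()                      # reparameterized sample
        a = torch.tanh(x)
        # log-prob with the tanh change-of-variables correction, as in SAC
        log_prob = dist.log_prob(x).sum(-1) - torch.log(1 - a.pow(2) + 1e-6).sum(-1)
        return a, log_prob

policy = GaussianPolicy()
# Learned dynamics model and soft Q critic; both assumed trained elsewhere.
dynamics = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                         nn.Linear(64, STATE_DIM))
q_value = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                        nn.Linear(64, 1))
optim = torch.optim.Adam(policy.parameters(), lr=3e-4)
ALPHA = 0.2  # entropy temperature (fixed here; SAC often tunes it)

def multi_step_policy_update(states):
    """Roll the policy through the learned model for HORIZON steps and
    maximize the soft value (Q minus entropy penalty) summed over the
    imagined trajectory, so the actor is updated jointly over future steps."""
    s, loss = states, 0.0
    for _ in range(HORIZON):
        a, log_prob = policy(s)
        soft_value = q_value(torch.cat([s, a], dim=-1)).squeeze(-1) - ALPHA * log_prob
        loss = loss - soft_value.mean()           # ascend the soft value
        s = dynamics(torch.cat([s, a], dim=-1))   # imagined next state
    optim.zero_grad()
    loss.backward()
    optim.step()
    return float(loss)

batch = torch.randn(32, STATE_DIM)                # stand-in for replay-buffer states
print(multi_step_policy_update(batch))
```

The sketch only illustrates the joint-over-horizon structure of the objective; the paper's monotonic improvement and convergence guarantees concern its specific extension of SAC's policy improvement step, which this example does not reproduce.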

Authors (8)
  1. Chuming Li (19 papers)
  2. Ruonan Jia (2 papers)
  3. Jie Liu (492 papers)
  4. Yinmin Zhang (11 papers)
  5. Yazhe Niu (16 papers)
  6. Yaodong Yang (169 papers)
  7. Yu Liu (786 papers)
  8. Wanli Ouyang (358 papers)
