Soft Policy Gradient Method for Maximum Entropy Deep Reinforcement Learning (1909.03198v1)

Published 7 Sep 2019 in cs.LG, cs.AI, and stat.ML

Abstract: Maximum entropy deep reinforcement learning (RL) methods have been demonstrated on a range of challenging continuous control tasks. However, existing methods either suffer from severe instability when trained on large amounts of off-policy data or cannot scale to tasks with very high state and action dimensionality, such as 3D humanoid locomotion. Moreover, the optimality of the Boltzmann policy targeted for a non-optimal soft value function is not well justified. In this paper, we first derive the soft policy gradient from the entropy-regularized expected reward objective for RL with continuous actions. We then present an off-policy, actor-critic, model-free maximum entropy deep RL algorithm called deep soft policy gradient (DSPG), which combines the soft policy gradient with the soft Bellman equation. To ensure stable learning while eliminating the need for two separate critics for the soft value functions, we leverage a double sampling approach to make the soft Bellman equation tractable. Experimental results demonstrate that our method outperforms prior off-policy methods.
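For readers unfamiliar with the maximum entropy RL framework the abstract builds on, the following is a brief sketch of the two standard objects it refers to, written in the notation commonly used in the literature; the symbols J, pi, alpha, gamma, and Q_soft are our choices here, not necessarily the paper's, and how the double sampling enters is our reading of the abstract rather than the paper's exact formulation.

    % Entropy-regularized expected reward objective: the policy is credited
    % for return plus the entropy of its action distribution at each step.
    J(\pi) = \mathbb{E}_{\tau \sim \pi}\Big[ \sum_{t} \gamma^{t}
             \big( r(s_t, a_t) + \alpha \, \mathcal{H}(\pi(\cdot \mid s_t)) \big) \Big]

    % Soft Bellman equation for the soft action-value function. The inner
    % expectation over a' is what a double sampling scheme would estimate
    % from sampled next actions, avoiding a separate soft state-value critic.
    Q_{\mathrm{soft}}(s,a) = r(s,a) + \gamma \, \mathbb{E}_{s' \sim p}\Big[
             \mathbb{E}_{a' \sim \pi}\big[ Q_{\mathrm{soft}}(s',a')
             - \alpha \log \pi(a' \mid s') \big] \Big]

Under this objective, a soft policy gradient method ascends J(\pi) directly, while the critic is regressed toward the soft Bellman target above.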

Authors (3)
  1. Wenjie Shi (6 papers)
  2. Shiji Song (103 papers)
  3. Cheng Wu (31 papers)
Citations (33)
