
Improved Exploration through Latent Trajectory Optimization in Deep Deterministic Policy Gradient (1911.06833v1)

Published 15 Nov 2019 in cs.LG, cs.AI, cs.RO, and stat.ML

Abstract: Model-free reinforcement learning algorithms such as Deep Deterministic Policy Gradient (DDPG) often require additional exploration strategies, especially when the actor is deterministic. This work evaluates model-based trajectory optimization as an exploration mechanism for Deep Deterministic Policy Gradient trained on a latent image embedding. In addition, an extension of DDPG is derived that uses a value function as critic and a learned deep dynamics model to compute the policy gradient. This approach leads to a symbiotic relationship between the deep reinforcement learning algorithm and the latent trajectory optimizer: the trajectory optimizer benefits from the critic learned by the RL algorithm, and the latter benefits from the enhanced exploration generated by the planner. The developed methods are evaluated on two continuous control tasks, one in simulation and one in the real world. In particular, a Baxter robot is trained to perform an insertion task while receiving only sparse rewards and images as observations from the environment.
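The abstract's core idea of computing a policy gradient by backpropagating through a learned dynamics model into a value-function critic can be illustrated with a toy sketch. This is not the paper's implementation: the linear dynamics `A`, `B`, the quadratic `value` critic, the action cost `c`, and the linear policy are all hypothetical stand-ins for the learned latent-space components described in the paper.

```python
import numpy as np

# Hypothetical stand-ins for learned components (the paper learns these
# from image embeddings; here they are fixed toy functions for illustration).
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # latent dynamics: z' = A z + B a
B = np.array([[0.0], [0.5]])
gamma, c = 0.99, 0.1                      # discount factor and action cost

def value(z):
    # Critic V(z): toy quadratic value preferring the latent origin.
    return -z @ z

def objective(z, a):
    # One-step model-based objective: J = r(a) + gamma * V(f(z, a)).
    z_next = A @ z + B @ a
    return -c * (a @ a) + gamma * value(z_next)

def policy_gradient(z, w):
    # Analytic gradient of J w.r.t. linear policy weights w (a = w @ z),
    # obtained by chaining through the dynamics model into the critic.
    a = np.array([w @ z])
    z_next = A @ z + B @ a
    dJ_da = -2 * c * a - 2 * gamma * (z_next @ B)  # dr/da + gamma * dV/dz' * dz'/da
    return dJ_da * z                                # dJ/dw = dJ/da * da/dw

z = np.array([1.0, -0.5])                 # current latent state
w = np.array([0.0, 0.0])                  # initial policy weights
g = policy_gradient(z, w)
w_new = w + 0.1 * g                       # gradient ascent on the model-based objective
print(objective(z, np.array([w_new @ z])) > objective(z, np.array([w @ z])))
```

A single ascent step along the model-based gradient improves the one-step objective, which is the mechanism by which the dynamics model and critic jointly shape the deterministic actor.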

Authors (5)
  1. Kevin Sebastian Luck
  2. Mel Vecerik
  3. Simon Stepputtis
  4. Heni Ben Amor
  5. Jonathan Scholz
Citations (8)
