
Model-Based Policy Gradients with Parameter-Based Exploration by Least-Squares Conditional Density Estimation (1307.5118v1)

Published 19 Jul 2013 in stat.ML and cs.LG

Abstract: The goal of reinforcement learning (RL) is to let an agent learn an optimal control policy in an unknown environment so that future expected rewards are maximized. The model-free RL approach directly learns the policy based on data samples. Although using many samples tends to improve the accuracy of policy learning, collecting a large number of samples is often expensive in practice. On the other hand, the model-based RL approach first estimates the transition model of the environment and then learns the policy based on the estimated transition model. Thus, if the transition model is accurately learned from a small amount of data, the model-based approach can perform better than the model-free approach. In this paper, we propose a novel model-based RL method by combining a recently proposed model-free policy search method called policy gradients with parameter-based exploration and the state-of-the-art transition model estimator called least-squares conditional density estimation. Through experiments, we demonstrate the practical usefulness of the proposed method.
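
The abstract names two building blocks: PGPE (policy gradients with parameter-based exploration), which searches in policy-parameter space by sampling parameters from a Gaussian "hyper-policy" and following a likelihood-ratio gradient of the expected return, and LSCDE, which estimates the transition density p(s'|s,a) so that policies can be evaluated on simulated rollouts. The sketch below is a minimal illustration of the PGPE side only, not the paper's implementation; `rollout_return` is a hypothetical callable standing in for the return of a trajectory, which in the model-based variant would be generated from the LSCDE-estimated model rather than the real environment.

```python
import numpy as np

def pgpe_update(mu, sigma, rollout_return, n_samples=20, lr=0.05, rng=None):
    """One PGPE step on a factorized Gaussian hyper-policy N(mu, sigma^2).

    rollout_return: placeholder callable mapping a sampled parameter
    vector to a scalar return (a model-based variant would compute this
    by simulating trajectories in the learned transition model).
    """
    rng = np.random.default_rng() if rng is None else rng
    thetas = mu + sigma * rng.standard_normal((n_samples, mu.size))
    returns = np.array([rollout_return(t) for t in thetas])
    adv = returns - returns.mean()          # baseline-subtracted returns
    # Likelihood-ratio gradients of E[R] w.r.t. the hyper-policy parameters:
    #   d/d mu    log N(theta; mu, sigma^2) = (theta - mu) / sigma^2
    #   d/d sigma log N(theta; mu, sigma^2) = ((theta - mu)^2 - sigma^2) / sigma^3
    grad_mu = ((thetas - mu) / sigma**2 * adv[:, None]).mean(axis=0)
    grad_sigma = (((thetas - mu) ** 2 - sigma**2) / sigma**3
                  * adv[:, None]).mean(axis=0)
    return mu + lr * grad_mu, np.maximum(sigma + lr * grad_sigma, 1e-3)

# Toy usage: climb a concave "return" surface with optimum at theta = [1, 1].
mu, sigma = np.zeros(2), np.ones(2)
toy_return = lambda theta: -np.sum((theta - 1.0) ** 2)  # stand-in for a rollout
for _ in range(300):
    mu, sigma = pgpe_update(mu, sigma, toy_return)
```

The LSCDE half of the method, a least-squares fit of the conditional density with Gaussian kernel models (solvable in closed form via a regularized linear system), would supply the learned transition model those rollouts run in; it is omitted here for brevity.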

Authors (5)
  1. Syogo Mori (1 paper)
  2. Voot Tangkaratt (18 papers)
  3. Tingting Zhao (19 papers)
  4. Jun Morimoto (18 papers)
  5. Masashi Sugiyama (286 papers)
Citations (25)
