Direct and indirect reinforcement learning (1912.10600v2)

Published 23 Dec 2019 in cs.LG, cs.AI, and stat.ML

Abstract: Reinforcement learning (RL) algorithms have been successfully applied to a range of challenging sequential decision-making and control tasks. In this paper, we classify RL into direct and indirect RL according to how each seeks the optimal policy of the Markov decision process (MDP). Direct RL finds the optimal policy by directly maximizing an objective function, usually the expectation of cumulative future rewards, with gradient-based methods. Indirect RL finds the optimal policy by solving the Bellman equation, the necessary and sufficient condition implied by Bellman's principle of optimality. We study the policy gradient forms of direct and indirect RL and show that both lead to the actor-critic architecture and can be unified into a single policy gradient built on an approximate value function and the stationary state distribution, revealing the equivalence of direct and indirect RL. We use a Gridworld task to examine the influence of the different policy gradient forms, illustrating their differences and relationships experimentally. Finally, we classify current mainstream RL algorithms under the direct/indirect taxonomy, alongside other common taxonomies: value-based vs. policy-based, and model-based vs. model-free.
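As a quick illustration of the two routes the abstract contrasts, here is a minimal sketch in standard RL notation (these are textbook formulations assumed for illustration, not equations quoted from the paper: policy \(\pi_\theta\) with parameters \(\theta\), discount factor \(\gamma \in [0,1)\), reward \(r_t\), state value \(v^{\pi}\)):

\[
\text{Direct RL:} \quad \max_{\theta}\; J(\theta) = \mathbb{E}_{\pi_\theta}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} r_t \right]
\]

\[
\text{Indirect RL:} \quad \text{solve } v^{\pi}(s) = \mathbb{E}_{\pi}\!\left[ r_t + \gamma\, v^{\pi}(s_{t+1}) \,\middle|\, s_t = s \right] \;\; \text{for all } s
\]

Direct RL ascends the gradient of \(J(\theta)\); indirect RL seeks a fixed point of the Bellman equation. The paper's central claim is that the policy gradient forms of both reduce to the same actor-critic update.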

Authors (7)
  1. Yang Guan (22 papers)
  2. Shengbo Eben Li (98 papers)
  3. Jingliang Duan (42 papers)
  4. Jie Li (553 papers)
  5. Yangang Ren (13 papers)
  6. Qi Sun (114 papers)
  7. Bo Cheng (51 papers)
Citations (32)
