Policy Gradient in Partially Observable Environments: Approximation and Convergence (1810.07900v3)

Published 18 Oct 2018 in cs.LG and stat.ML

Abstract: Policy gradient is a generic and flexible reinforcement learning approach that generally enjoys simplicity in analysis, implementation, and deployment. Over the last few decades, this approach has been extensively developed for fully observable environments. In this paper, we generalize a variety of these advances to partially observable settings and, as in the fully observable case, we keep our focus on the class of Markovian policies. We propose a series of technical tools, including a novel notion of advantage function, to develop policy gradient algorithms and study their convergence properties in such environments. Deploying these tools, we generalize a variety of existing theoretical guarantees, such as the policy gradient and convergence theorems, to partially observable domains; these results could also be carried over to other settings of interest. This study also sheds light on policy gradient approaches in real-world applications, which tend to be partially observable.
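
For context, a minimal sketch of the two objects the abstract contrasts: the classical policy gradient theorem for a fully observable MDP, and the likelihood-ratio form obtained when a Markovian (memoryless) policy conditions only on observations. The paper's own advantage function for the partially observable case is not reproduced on this page, so the second expression below uses the plain reward-to-go rather than the paper's definition.

Fully observable (state-conditioned policy, advantage form):
\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\!\left[\sum_{t \ge 0} \gamma^t \, \nabla_\theta \log \pi_\theta(a_t \mid s_t) \, A^{\pi_\theta}(s_t, a_t)\right]

Partially observable (observation-conditioned Markovian policy, likelihood-ratio form):
\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\!\left[\sum_{t \ge 0} \gamma^t \, \nabla_\theta \log \pi_\theta(a_t \mid o_t) \, G_t\right], \qquad G_t = \sum_{t' \ge t} \gamma^{t'-t} r_{t'}

Here J(\theta) is the expected discounted return of \pi_\theta; the partially observable form replaces the state s_t with the current observation o_t inside the score function, which is what restricting attention to Markovian policies means in this setting. The paper's contribution includes an advantage-like quantity that can stand in for G_t; that construction is not shown here.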

Citations (8)