Revisit Policy Optimization in Matrix Form (1909.09186v1)
Abstract: In the tabular case, when the reward and environment dynamics are known, policy evaluation can be written as $\bm{V}_{\bm{\pi}} = (I - \gamma P_{\bm{\pi}})^{-1} \bm{r}_{\bm{\pi}}$, where $P_{\bm{\pi}}$ is the state transition matrix under policy ${\bm{\pi}}$ and $\bm{r}_{\bm{\pi}}$ is the reward signal under ${\bm{\pi}}$. The difficulty is that $P_{\bm{\pi}}$ and $\bm{r}_{\bm{\pi}}$ are both entangled with ${\bm{\pi}}$, so every time we update ${\bm{\pi}}$, they change with it. In this paper, we leverage the notation of \cite{wang2007dual} to disentangle ${\bm{\pi}}$ from the environment dynamics, which makes optimization over the policy more straightforward. We show that the policy gradient theorem \cite{sutton2018reinforcement} and TRPO \cite{schulman2015trust} can be put into a more general framework, and that this notation has good potential to be extended to model-based reinforcement learning.
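The closed-form evaluation in the abstract can be made concrete with a small NumPy sketch. The MDP tensors `P`, `R`, the policy `pi`, and the discount `gamma` below are illustrative assumptions, not code or quantities from the paper; the sketch only shows how $P_{\bm{\pi}}$ and $\bm{r}_{\bm{\pi}}$ are mixed with ${\bm{\pi}}$ and then fed into $\bm{V}_{\bm{\pi}} = (I - \gamma P_{\bm{\pi}})^{-1} \bm{r}_{\bm{\pi}}$.

```python
import numpy as np

# Minimal sketch of tabular policy evaluation in matrix form,
# assuming a small MDP with known dynamics P[s, a, s'] and rewards R[s, a].
num_states, num_actions = 4, 2
rng = np.random.default_rng(0)

P = rng.random((num_states, num_actions, num_states))
P /= P.sum(axis=2, keepdims=True)          # rows sum to 1 over next states
R = rng.random((num_states, num_actions))

# A fixed stochastic policy pi[s, a] (uniform here for illustration).
pi = np.full((num_states, num_actions), 1.0 / num_actions)
gamma = 0.9

# Policy-conditioned quantities: both change whenever pi is updated.
# P_pi[s, s'] = sum_a pi[s, a] * P[s, a, s'],  r_pi[s] = sum_a pi[s, a] * R[s, a]
P_pi = np.einsum("sa,sap->sp", pi, P)
r_pi = np.einsum("sa,sa->s", pi, R)

# Closed-form policy evaluation: V_pi = (I - gamma * P_pi)^{-1} r_pi,
# solved as a linear system instead of forming the inverse explicitly.
V_pi = np.linalg.solve(np.eye(num_states) - gamma * P_pi, r_pi)
print(V_pi)
```

Solving the linear system with `np.linalg.solve` is numerically preferable to computing the matrix inverse directly, and makes explicit why every policy update forces a recomputation of both $P_{\bm{\pi}}$ and $\bm{r}_{\bm{\pi}}$.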