Variational Inference for Policy Gradient (1802.07833v2)
Published 21 Feb 2018 in cs.LG, cs.AI, and stat.ML
Abstract: Inspired by the seminal work on Stein Variational Inference and Stein Variational Policy Gradient, we derive a method that generates samples from the posterior variational parameter distribution by explicitly minimizing the KL divergence to the target distribution in an amortized fashion. We then apply this variational inference technique to vanilla policy gradient, TRPO, and PPO with Bayesian Neural Network parameterizations for reinforcement learning problems.
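To make the central idea concrete, below is a minimal sketch (not the authors' code) of explicitly minimizing KL(q(θ) || p(θ)) where, as in the Stein Variational Policy Gradient line of work, the target is p(θ) ∝ exp(J(θ)/α) · prior(θ). For simplicity the sketch replaces the amortized sampler with a diagonal Gaussian over the weights of a Bayesian policy network and uses a stand-in for the Monte Carlo return estimate J(θ); the names `estimated_return`, `alpha`, and the toy problem are illustrative assumptions, not details from the paper.

```python
# Sketch: explicit KL(q || p) minimization over Bayesian policy parameters,
# with p(theta) proportional to exp(J(theta)/alpha) * prior(theta).
# q is simplified to a diagonal Gaussian; in the paper an amortized
# generator produces the parameter samples.

import torch

torch.manual_seed(0)

dim = 8        # number of policy parameters (toy size)
alpha = 1.0    # temperature on the return term
prior = torch.distributions.Normal(torch.zeros(dim), torch.ones(dim))

# Variational posterior q(theta) = N(mu, diag(sigma^2)) over policy weights.
mu = torch.zeros(dim, requires_grad=True)
log_sigma = torch.zeros(dim, requires_grad=True)

def estimated_return(theta):
    # Stand-in for a Monte Carlo estimate of the expected return J(theta);
    # in the RL setting this would come from rollouts under the sampled policy.
    target = torch.linspace(-1.0, 1.0, dim)
    return -((theta - target) ** 2).sum()

opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

for step in range(2000):
    q = torch.distributions.Normal(mu, log_sigma.exp())
    theta = q.rsample()  # reparameterized sample of policy weights
    # KL(q || p) up to an additive constant:
    #   E_q[ log q(theta) - J(theta)/alpha - log prior(theta) ]
    loss = (q.log_prob(theta).sum()
            - estimated_return(theta) / alpha
            - prior.log_prob(theta).sum())
    opt.zero_grad()
    loss.backward()
    opt.step()

print("posterior mean over policy weights:", mu.detach())
```

In the full method, θ parameterizes the policy network used by vanilla policy gradient, TRPO, or PPO, and the single-sample KL estimate above would be averaged over many reparameterized samples per update.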