Augment-Reinforce-Merge Policy Gradient for Binary Stochastic Policy (1903.05284v1)
Published 13 Mar 2019 in cs.LG, cs.AI, and stat.ML
Abstract: Due to the high variance of policy gradients, on-policy optimization algorithms are plagued by low sample efficiency. In this work, we propose the Augment-Reinforce-Merge (ARM) policy gradient estimator as an unbiased, low-variance alternative to previous baseline estimators on tasks with a binary action space, inspired by the recent ARM gradient estimator for discrete random variable models. We show that the ARM policy gradient estimator achieves variance reduction with theoretical guarantees, and that it leads to significantly more stable and faster convergence of policies parameterized by neural networks.
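For intuition, the underlying ARM trick (Yin & Zhou, 2019) for a single Bernoulli variable can be sketched as follows. For z ~ Bernoulli(sigmoid(phi)), the gradient of E[f(z)] with respect to phi equals E_{u~Uniform(0,1)}[(f(1[u > sigmoid(-phi)]) - f(1[u < sigmoid(phi)])) (u - 1/2)], which a Monte Carlo average can estimate without differentiating f. This is a minimal illustrative sketch of the generic ARM estimator, not the paper's policy-gradient algorithm; the function names and test objective below are assumptions chosen for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def arm_gradient(f, phi, num_samples=10000, rng=None):
    """Monte Carlo ARM estimate of d/dphi E_{z~Bernoulli(sigmoid(phi))}[f(z)].

    Uses the identity (Yin & Zhou, 2019):
      grad = E_{u~U(0,1)}[(f(1[u > sigmoid(-phi)]) - f(1[u < sigmoid(phi)])) * (u - 1/2)]
    Only forward evaluations of f are needed; f may be non-differentiable.
    """
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(size=num_samples)
    # Two antithetically coupled binary samples from the same uniform draw.
    z_pos = (u > sigmoid(-phi)).astype(float)
    z_neg = (u < sigmoid(phi)).astype(float)
    return np.mean((f(z_pos) - f(z_neg)) * (u - 0.5))

# Illustrative check against a closed form: with f(z) = (z - 0.45)**2 and
# p = sigmoid(phi), E[f(z)] = p*0.55**2 + (1-p)*0.45**2, whose derivative
# at phi = 0 is p*(1-p)*(0.55**2 - 0.45**2) = 0.25 * 0.1 = 0.025.
est = arm_gradient(lambda z: (z - 0.45) ** 2, phi=0.0)
```

The antithetic coupling of `z_pos` and `z_neg` through the shared uniform `u` is what drives the variance reduction relative to a plain REINFORCE estimator, since the two function evaluations largely cancel except where the coupled samples disagree.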