Data Efficient Training for Reinforcement Learning with Adaptive Behavior Policy Sharing (2002.05229v1)
Abstract: Deep Reinforcement Learning (RL) has proven powerful for decision making in simulated environments. However, training a deep RL model is challenging in real-world applications such as production-scale health-care or recommender systems because of the expense of interaction and limited budgets at deployment. One source of this data inefficiency is the expensive hyper-parameter tuning required when optimizing deep neural networks. We propose Adaptive Behavior Policy Sharing (ABPS), a data-efficient training algorithm that allows sharing of experience collected by a behavior policy adaptively selected from a pool of agents trained with an ensemble of hyper-parameters. We further extend ABPS to evolve hyper-parameters during training by hybridizing ABPS with an adapted version of Population Based Training (ABPS-PBT). We conduct experiments on multiple Atari games with up to 16 hyper-parameter/architecture setups. ABPS achieves superior overall performance, reduced variance among the top 25% of agents, and performance of the best agent equivalent to conventional hyper-parameter tuning with independent training, even though ABPS requires only the same number of environment interactions as training a single agent. We also show that ABPS-PBT further improves convergence speed and reduces variance.
- Ge Liu (24 papers)
- Rui Wu (65 papers)
- Heng-Tze Cheng (16 papers)
- Jing Wang (740 papers)
- Jayden Ooi (6 papers)
- Lihong Li (72 papers)
- Ang Li (472 papers)
- Wai Lok Sibon Li (3 papers)
- Craig Boutilier (78 papers)
- Ed Chi (24 papers)
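
To make the core idea of ABPS concrete, below is a minimal Python sketch of the training loop described in the abstract: a pool of agents with different hyper-parameters, one of which is adaptively selected as the behavior policy each episode, with the collected experience shared across the whole pool. This is an illustration under stated assumptions, not the paper's implementation; the `Agent` class, the bandit-style `select_behavior_agent` selector, and the `env` interface (a `step` returning `(obs, reward, done)`) are all hypothetical placeholders, and the paper's actual selection criterion may differ.

```python
import random
from collections import deque


class Agent:
    """Placeholder agent; a real implementation would wrap e.g. a DQN trained
    with its own hyper-parameters (learning rate, network width, etc.)."""

    def __init__(self, name, hyperparams):
        self.name = name
        self.hyperparams = hyperparams
        self.recent_return = 0.0  # running estimate of this agent's performance

    def act(self, observation):
        # Hypothetical: return an action from this agent's current policy.
        return random.choice([0, 1])

    def train_step(self, batch):
        # Hypothetical: one gradient step on a batch of shared transitions.
        pass


def select_behavior_agent(agents, epsilon=0.2):
    """Bandit-style selection: mostly exploit the best-performing agent,
    occasionally explore another one (a simplification of ABPS's adaptive
    behavior-policy selection)."""
    if random.random() < epsilon:
        return random.choice(agents)
    return max(agents, key=lambda a: a.recent_return)


def abps_training_loop(env, agents, num_episodes=100, batch_size=32):
    replay_buffer = deque(maxlen=100_000)  # experience shared by all agents
    for _ in range(num_episodes):
        behavior = select_behavior_agent(agents)
        obs, done, episode_return = env.reset(), False, 0.0
        while not done:
            action = behavior.act(obs)
            next_obs, reward, done = env.step(action)
            replay_buffer.append((obs, action, reward, next_obs, done))
            episode_return += reward
            obs = next_obs
        # Update the selector's estimate for the agent that just acted.
        behavior.recent_return = 0.9 * behavior.recent_return + 0.1 * episode_return
        # Every agent trains on the shared experience, so the environment
        # interaction budget equals that of training a single agent.
        if len(replay_buffer) >= batch_size:
            batch = random.sample(replay_buffer, batch_size)
            for agent in agents:
                agent.train_step(batch)
```

The key design point the sketch tries to convey is that only the selected behavior policy interacts with the environment, while all agents in the hyper-parameter ensemble learn from the shared replay buffer; ABPS-PBT would additionally copy weights and perturb hyper-parameters of poorly performing agents during training, which is omitted here.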