DSAC-T: Distributional Soft Actor-Critic with Three Refinements (2310.05858v4)
Abstract: Reinforcement learning (RL) has proven to be highly effective in tackling complex decision-making and control tasks. However, prevalent model-free RL methods often face severe performance degradation due to the well-known overestimation issue. In response to this problem, we recently introduced an off-policy RL algorithm, called distributional soft actor-critic (DSAC or DSAC-v1), which can effectively improve value estimation accuracy by learning a continuous Gaussian value distribution. Nonetheless, standard DSAC has its own shortcomings, including an occasionally unstable learning process and the need for task-specific reward scaling, which may hinder its overall performance and adaptability in certain tasks. This paper introduces three important refinements to standard DSAC to address these shortcomings: expected value substituting, twin value distribution learning, and variance-based critic gradient adjusting. The modified RL algorithm is named DSAC with three refinements (DSAC-T or DSAC-v2), and its performance is systematically evaluated on a diverse set of benchmark tasks. Without any task-specific hyperparameter tuning, DSAC-T surpasses or matches a wide range of mainstream model-free RL algorithms, including SAC, TD3, DDPG, TRPO, and PPO, in all tested environments. Additionally, unlike its standard version, DSAC-T ensures a highly stable learning process and delivers similar performance across varying reward scales.
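To make the three refinements named in the abstract more concrete, the PyTorch-style sketch below shows one plausible critic update with twin Gaussian value distributions. It is a minimal sketch assembled from the abstract's description, not the authors' implementation: the network layout, the clipping bound `std_bound`, the temperature `alpha`, and the exact loss form are illustrative assumptions, and the target here uses only the expected return for simplicity.

```python
# Minimal sketch (not the authors' implementation) of a DSAC-T-style critic
# update. Assumptions: Gaussian value heads, a soft (entropy-regularized)
# target, and illustrative constants such as `std_bound`.
import torch
import torch.nn as nn


class GaussianValueNet(nn.Module):
    """Predicts the mean and standard deviation of a Gaussian return distribution."""

    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, 1)
        self.log_std_head = nn.Linear(hidden, 1)

    def forward(self, obs, act):
        h = self.body(torch.cat([obs, act], dim=-1))
        mean = self.mean_head(h)
        std = torch.exp(self.log_std_head(h).clamp(-5.0, 5.0))
        return mean, std


def critic_loss(net, target_nets, batch, gamma=0.99, alpha=0.2, std_bound=10.0):
    """One illustrative critic loss combining the three refinements.

    `batch` is a tuple of float tensors: (obs, act, rew, next_obs, next_act,
    next_logp, done), where next_act and next_logp come from the current policy.
    """
    obs, act, rew, next_obs, next_act, next_logp, done = batch
    with torch.no_grad():
        # Twin value distribution learning: evaluate both target critics and
        # keep the smaller mean to curb overestimation (only the mean is
        # needed in this simplified target).
        (m1, _), (m2, _) = (t(next_obs, next_act) for t in target_nets)
        next_mean = torch.min(m1, m2)
        # Expected value substituting: build the soft target from the
        # expected return (the Gaussian mean) rather than a sampled return.
        target = rew + gamma * (1.0 - done) * (next_mean - alpha * next_logp)
    mean, std = net(obs, act)
    # Gaussian negative log-likelihood of the target under N(mean, std^2).
    nll = (target - mean) ** 2 / (2.0 * std ** 2) + torch.log(std)
    # Variance-based critic gradient adjusting: weight the loss by a detached,
    # clipped variance so the 1/std^2 factor in the mean gradient is cancelled
    # and update magnitudes do not explode when the predicted variance is small.
    w = (std.detach() ** 2).clamp(1e-6, std_bound ** 2)
    return (w * nll).mean()
```

The detached-variance weighting above is one simple way to decouple the critic step size from the learned variance; the paper's exact formulation of the gradient adjustment and of the return target may differ in detail.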
- S. E. Li, Reinforcement Learning for Sequential Decision and Optimal Control. Springer Verlag, Singapore, 2023.
- D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al., “Mastering the game of Go with deep neural networks and tree search,” Nature, vol. 529, no. 7587, p. 484, 2016.
- D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, et al., “Mastering the game of Go without human knowledge,” Nature, vol. 550, no. 7676, p. 354, 2017.
- V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, p. 529, 2015.
- T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” in 4th International Conference on Learning Representations (ICLR 2016), (San Juan, Puerto Rico), 2016.
- H. van Hasselt, “Double Q-learning,” in Advances in Neural Information Processing Systems 23 (NeurIPS 2010), (Vancouver, British Columbia, Canada), pp. 2613–2621, 2010.
- H. van Hasselt, A. Guez, and D. Silver, “Deep reinforcement learning with double Q-learning,” in Proceedings of the 30th Conference on Artificial Intelligence (AAAI 2016), (Phoenix, Arizona, USA), pp. 2094–2100, 2016.
- S. Fujimoto, H. van Hoof, and D. Meger, “Addressing function approximation error in actor-critic methods,” in Proceedings of the 35th International Conference on Machine Learning (ICML 2018), (Stockholmsmässan, Stockholm, Sweden), pp. 1587–1596, PMLR, 2018.
- T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, “Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor,” in Proceedings of the 35th International Conference on Machine Learning (ICML 2018), (Stockholmsmässan, Stockholm, Sweden), pp. 1861–1870, PMLR, 2018.
- T. Haarnoja, A. Zhou, K. Hartikainen, G. Tucker, S. Ha, J. Tan, V. Kumar, H. Zhu, A. Gupta, P. Abbeel, et al., “Soft actor-critic algorithms and applications,” arXiv preprint arXiv:1812.05905, 2018.
- J. Duan, Y. Guan, S. E. Li, Y. Ren, Q. Sun, and B. Cheng, “Distributional soft actor-critic: Off-policy reinforcement learning for addressing value estimation errors,” IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 11, pp. 6584–6598, 2021.
- Y. Guan, S. E. Li, J. Duan, J. Li, Y. Ren, Q. Sun, and B. Cheng, “Direct and indirect reinforcement learning,” International Journal of Intelligent Systems, vol. 36, no. 8, pp. 4439–4467, 2021.
- W. Wang, Y. Zhang, J. Gao, Y. Jiang, Y. Yang, Z. Zheng, W. Zou, J. Li, C. Zhang, W. Cao, et al., “GOPS: A general optimal control problem solver for autonomous driving and industrial control applications,” Communications in Transportation Research, vol. 3, p. 100096, 2023.
- T. Haarnoja, H. Tang, P. Abbeel, and S. Levine, “Reinforcement learning with deep energy-based policies,” in Proceedings of the 34th International Conference on Machine Learning, (ICML 2017), (Sydney, NSW, Australia), pp. 1352–1361, PMLR, 2017.
- Y. Ren, J. Duan, S. E. Li, Y. Guan, and Q. Sun, “Improving generalization of reinforcement learning with minimax distributional soft actor-critic,” in 23rd IEEE International Conference on Intelligent Transportation Systems (IEEE ITSC 2020), (Rhodes, Greece), IEEE, 2020.
- J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz, “Trust region policy optimization,” in Proceedings of the 32nd International Conference on Machine Learning, (ICML 2015), (Lille, France), pp. 1889–1897, 2015.
- J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” arXiv preprint arXiv:1707.06347, 2017.
- M. G. Bellemare, W. Dabney, and R. Munos, “A distributional perspective on reinforcement learning,” in Proceedings of the 34th International Conference on Machine Learning, (ICML 2017), (Sydney, NSW, Australia), pp. 449–458, PMLR, 2017.
- W. Dabney, M. Rowland, M. G. Bellemare, and R. Munos, “Distributional reinforcement learning with quantile regression,” in Proceedings of the 32nd Conference on Artificial Intelligence, (AAAI 2018), (New Orleans, Louisiana, USA), pp. 2892–2901, 2018.
- W. Dabney, G. Ostrovski, D. Silver, and R. Munos, “Implicit quantile networks for distributional reinforcement learning,” in Proceedings of the 35th International Conference on Machine Learning (ICML 2018), (Stockholmsmässan, Stockholm, Sweden), pp. 1096–1105, PMLR, 2018.
- D. Yang, L. Zhao, Z. Lin, T. Qin, J. Bian, and T.-Y. Liu, “Fully parameterized quantile function for distributional reinforcement learning,” Advances in Neural Information Processing Systems, vol. 32, 2019.
- M. Rowland, R. Dadashi, S. Kumar, R. Munos, M. G. Bellemare, and W. Dabney, “Statistics and samples in distributional reinforcement learning,” in Proceedings of the 36th International Conference on Machine Learning, (ICML 2019), (Long Beach, CA, USA), pp. 5528–5536, PMLR, 2019.
- B. Mavrin, H. Yao, L. Kong, K. Wu, and Y. Yu, “Distributional reinforcement learning for efficient exploration,” in Proceedings of the 36th International Conference on Machine Learning, (ICML 2019), (Long Beach, CA, USA), pp. 4424–4434, PMLR, 2019.
- G. Barth-Maron, M. W. Hoffman, D. Budden, W. Dabney, D. Horgan, D. TB, A. Muldal, N. Heess, and T. P. Lillicrap, “Distributed distributional deterministic policy gradients,” in 6th International Conference on Learning Representations, (ICLR 2018), (Vancouver, BC, Canada), 2018.
- C. Tessler, G. Tennenholtz, and S. Mannor, “Distributional policy optimization: An alternative approach for continuous control,” Advances in Neural Information Processing Systems, vol. 32, 2019.
- W. Dabney, Z. Kurth-Nelson, N. Uchida, C. K. Starkweather, D. Hassabis, R. Munos, and M. Botvinick, “A distributional code for value in dopamine-based reinforcement learning,” Nature, pp. 1–5, 2020.
- Jingliang Duan
- Wenxuan Wang
- Liming Xiao
- Jiaxin Gao
- Shengbo Eben Li