SARC: Soft Actor Retrospective Critic (2306.16503v1)

Published 28 Jun 2023 in cs.LG and cs.AI

Abstract: The two-time-scale nature of SAC, an actor-critic algorithm, means that at any given time the critic estimate has not yet converged for the actor; because the critic learns faster than the actor, however, eventual consistency between the two is ensured. Various strategies have been introduced in the literature to learn better gradient estimates and thereby achieve better convergence. Since the gradient estimate depends on the critic, we posit that improving the critic can provide a better gradient estimate for the actor at each step. Building on this, we propose Soft Actor Retrospective Critic (SARC), which augments the SAC critic loss with an additional term, the retrospective loss, leading to faster critic convergence and, consequently, better policy gradient estimates for the actor. An existing implementation of SAC can be adapted to SARC with minimal modifications. Through extensive experimentation and analysis, we show that SARC provides consistent improvement over SAC on benchmark environments. We plan to open-source the code and all experiment data at: https://github.com/sukritiverma1996/SARC.
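
The abstract describes SARC as SAC with a retrospective term added to the critic loss. The following is a minimal sketch, not the authors' released code, of how such an augmented critic update could look in PyTorch: the names, the coefficient `kappa`, and the snapshot schedule are illustrative assumptions, and the exact weighting of the retrospective term in the paper may differ.

```python
# Hedged sketch: a SAC-style critic loss augmented with a retrospective term
# that pulls the current Q-estimate toward the Bellman target while pushing
# it away from the prediction of a frozen, earlier snapshot of the critic.
import copy
import torch
import torch.nn.functional as F

def sarc_critic_loss(critic, past_critic, obs, act, target_q, kappa=4.0):
    """TD loss of SAC plus an illustrative retrospective term.

    `past_critic` holds earlier ("retrospective") parameters of the critic;
    `kappa` is an assumed scaling coefficient for the target-distance term.
    """
    q = critic(obs, act)
    with torch.no_grad():
        q_past = past_critic(obs, act)   # "looking back": prediction of past parameters

    td_loss = F.mse_loss(q, target_q)    # usual soft Bellman error
    retro_loss = (kappa * (q - target_q).abs() - (q - q_past).abs()).mean()
    return td_loss + retro_loss

# The retrospective snapshot would be refreshed periodically during training,
# e.g. every few thousand critic updates:
#   past_critic = copy.deepcopy(critic).eval()
```

In this reading, the retrospective term acts as a margin-style penalty: it rewards the critic for being closer to the current target than to its own past estimate, which is the mechanism the abstract credits for faster critic convergence.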
