
Trajectory-Oriented Policy Optimization with Sparse Rewards (2401.02225v3)

Published 4 Jan 2024 in cs.LG

Abstract: Deep reinforcement learning (DRL) is difficult to master in tasks with sparse rewards, where the reward signal indicates only whether the task has been partially or fully completed, so the agent must take many exploratory actions before receiving meaningful feedback. As a result, most existing DRL exploration algorithms fail to learn practical policies within a reasonable timeframe. To address this challenge, we introduce an approach that leverages offline demonstration trajectories for faster and more efficient online RL in environments with sparse rewards. Our key insight is to treat offline demonstration trajectories as guidance, rather than as targets for imitation, so that our method learns a policy whose marginal state-action visitation distribution matches that of the offline demonstrations. Specifically, we introduce a novel trajectory distance based on maximum mean discrepancy (MMD) and cast policy optimization as a distance-constrained optimization problem. We then show that this optimization problem can be reduced to a policy-gradient algorithm with rewards shaped by the offline demonstrations. The proposed algorithm is evaluated on extensive discrete and continuous control tasks with sparse and misleading rewards, and the experimental results demonstrate that it significantly outperforms baseline methods in exploration diversity and in acquiring an optimal policy.
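To make the construction concrete, here is a rough sketch of the MMD-based trajectory distance and the distance-constrained objective the abstract describes. The notation (visitation distributions $\rho_\pi$ and $\rho_E$, kernel $k$, threshold $\epsilon$, multiplier $\lambda$) and the Lagrangian relaxation are illustrative assumptions, not the paper's verbatim formulation:

$$
\mathrm{MMD}^2(\rho_\pi, \rho_E) = \mathbb{E}_{x,x' \sim \rho_\pi}\big[k(x,x')\big] - 2\,\mathbb{E}_{x \sim \rho_\pi,\, y \sim \rho_E}\big[k(x,y)\big] + \mathbb{E}_{y,y' \sim \rho_E}\big[k(y,y')\big], \qquad x=(s,a),
$$

$$
\max_\pi \; J(\pi) \;\; \text{s.t.} \;\; \mathrm{MMD}(\rho_\pi, \rho_E) \le \epsilon \quad \Longrightarrow \quad \max_\pi \; J(\pi) - \lambda\, \mathrm{MMD}^2(\rho_\pi, \rho_E),
$$

where $\rho_\pi$ is the state-action visitation distribution of the learned policy and $\rho_E$ that of the offline demonstrations; the relaxed objective can be optimized with standard policy gradients, with the MMD term acting as demonstration-shaped reward.

Below is a minimal sample-based sketch of the squared-MMD estimate, assuming a Gaussian kernel over concatenated (state, action) vectors; the kernel choice and bandwidth `sigma` are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian kernel values between rows of x (n, d) and y (m, d).
    # The Gaussian kernel and bandwidth sigma are illustrative assumptions.
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(policy_sa, demo_sa, sigma=1.0):
    # Biased (V-statistic) estimate of squared MMD between policy-visited and
    # demonstration state-action samples; each row is a concatenated (s, a) vector.
    k_pp = gaussian_kernel(policy_sa, policy_sa, sigma).mean()
    k_pd = gaussian_kernel(policy_sa, demo_sa, sigma).mean()
    k_dd = gaussian_kernel(demo_sa, demo_sa, sigma).mean()
    return k_pp - 2.0 * k_pd + k_dd

# Usage sketch with random stand-ins for rollout and demonstration data.
rng = np.random.default_rng(0)
policy_sa = rng.normal(size=(256, 8))        # hypothetical policy samples
demo_sa = rng.normal(loc=0.5, size=(64, 8))  # hypothetical demonstration samples
print(mmd2(policy_sa, demo_sa))
```

The V-statistic estimator above is the simplest option; an unbiased U-statistic variant would drop the diagonal kernel terms, and in practice the bandwidth is often set by the median heuristic.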

Authors (3)
  1. Guojian Wang
  2. Faguo Wu
  3. Xiao Zhang
Citations (1)