Benchmarking Robustness of Deep Reinforcement Learning approaches to Online Portfolio Management (2306.10950v1)

Published 19 Jun 2023 in cs.LG and q-fin.PM

Abstract: Deep Reinforcement Learning (DRL) approaches to Online Portfolio Selection have grown in popularity in recent years. Because training Reinforcement Learning agents is highly sensitive to design choices, careful work on market representation, behavioral objectives, and training processes is required, yet such care has often been lacking in previous works. We propose a training and evaluation process to assess the performance of classical DRL algorithms for portfolio management. We found that most DRL algorithms were not robust, with strategies generalizing poorly and degrading quickly during backtesting.
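
The abstract's central claim, that learned strategies degrade quickly during backtesting, rests on rolling a trained policy over held-out market data and tracking net returns after transaction costs. Below is a minimal sketch of such a backtest loop. The `policy` callable mapping price history to portfolio weights, the proportional transaction-cost model, and the `backtest`/`sharpe` helpers are illustrative assumptions, not the paper's actual evaluation protocol.

```python
import numpy as np

def backtest(policy, prices, cost_rate=0.0025):
    """Roll a policy over held-out price data; return per-period net returns.

    policy: callable taking the (t, n_assets) price history so far and
            returning portfolio weights that sum to 1 (hypothetical interface).
    prices: (T, n_assets) array of asset prices.
    cost_rate: proportional transaction cost per unit of turnover.
    """
    T, n = prices.shape
    rel = prices[1:] / prices[:-1]           # price relatives per period
    w_prev = np.ones(n) / n                  # start equal-weighted
    net_returns = []
    for t in range(T - 1):
        w = policy(prices[: t + 1])          # target weights from history
        turnover = np.abs(w - w_prev).sum()  # rebalancing volume traded
        gross = float(w @ rel[t])            # gross wealth relative
        net = gross * (1.0 - cost_rate * turnover)
        net_returns.append(net - 1.0)
        w_prev = (w * rel[t]) / gross        # weights drift before next rebalance
    return np.asarray(net_returns)

def sharpe(returns, periods_per_year=252):
    """Annualized Sharpe ratio, assuming a zero risk-free rate."""
    return np.sqrt(periods_per_year) * returns.mean() / returns.std()

# Equal-weight baseline posing as a "policy", plus a synthetic-data demo.
equal_weight = lambda history: np.ones(history.shape[1]) / history.shape[1]
prices = np.cumprod(1 + 0.01 * np.random.randn(250, 5), axis=0)
print(sharpe(backtest(equal_weight, prices)))
```

Running such a loop over several disjoint test windows and comparing the resulting Sharpe ratios against an equal-weight or buy-and-hold baseline is one simple way to surface the kind of generalization failure the authors report.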
