Deep Reinforcement Learning for Quantitative Trading (2312.15730v1)

Published 25 Dec 2023 in q-fin.TR

Abstract: AI and Machine Learning (ML) are transforming Quantitative Trading (QT) through advanced algorithms that sift through extensive financial datasets to pinpoint lucrative investment opportunities. AI-driven models, particularly those employing ML techniques such as deep learning and reinforcement learning, have shown great prowess in predicting market trends and executing trades with a speed and accuracy that far surpass human capabilities. Their capacity to automate critical tasks, such as discerning market conditions and executing trading strategies, has been pivotal. However, persistent challenges remain in current QT methods, especially in effectively handling noisy, high-frequency financial data. Striking a balance between exploration and exploitation poses another challenge for AI-driven trading agents. To surmount these hurdles, our proposed solution, QTNet, introduces an adaptive trading model that autonomously formulates QT strategies through an intelligent trading agent. By combining deep reinforcement learning (DRL) with imitative learning, we bolster the model's proficiency. To tackle the challenges posed by volatile financial data, we formulate the QT mechanism as a Partially Observable Markov Decision Process (POMDP). Moreover, by embedding imitative learning, the model can capitalize on traditional trading tactics while maintaining a balance between exploration and exploitation. For a more realistic simulation, the trading agent is trained on minute-frequency data sourced from a live financial market. Experimental findings underscore the model's proficiency in extracting robust market features and its adaptability to diverse market conditions.
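
The abstract frames trading as a POMDP trained on minute-frequency market data. As a concrete illustration only, here is a minimal sketch of what such a partially observable trading environment could look like: the agent sees only a short rolling window of recent minute-bar returns rather than the full market state, and pays a proportional cost whenever it changes position. The class name, window size, cost level, and three-action space are assumptions made for this sketch, not details taken from the paper.

```python
import numpy as np

class MinutePOMDPTradingEnv:
    """Toy sketch of a partially observable trading environment.

    The agent never observes the full market state; each observation is
    only a rolling window of recent minute-bar log returns, which is what
    makes the decision process a POMDP rather than a fully observed MDP.
    """

    ACTIONS = (-1, 0, 1)  # short, flat, long

    def __init__(self, minute_prices, window=30, cost=1e-4):
        self.prices = np.asarray(minute_prices, dtype=float)
        self.returns = np.diff(np.log(self.prices))
        self.window = window
        self.cost = cost  # proportional cost charged on position changes
        self.reset()

    def reset(self):
        self.t = self.window
        self.position = 0
        return self._observe()

    def _observe(self):
        # Partial observation: only the last `window` returns, with no
        # access to the full history or any latent market regime.
        return self.returns[self.t - self.window : self.t]

    def step(self, action_idx):
        new_position = self.ACTIONS[action_idx]
        # Reward = PnL of the old position over the next minute, minus a
        # transaction cost proportional to the change in position.
        reward = (self.position * self.returns[self.t]
                  - self.cost * abs(new_position - self.position))
        self.position = new_position
        self.t += 1
        done = self.t >= len(self.returns)
        return self._observe(), reward, done

# Usage with synthetic minute prices:
prices = 100 * np.exp(np.cumsum(0.0002 * np.random.randn(1_000)))
env = MinutePOMDPTradingEnv(prices)
obs = env.reset()
obs, reward, done = env.step(2)  # index 2 = go long
```

A recurrent policy, as in memory-based DRL, would then aggregate these windowed observations over time to form a belief about the hidden market state.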

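The abstract also states that imitative learning lets the agent capitalize on traditional trading tactics while balancing exploration and exploitation. One established way to blend demonstrations into value-based DRL is a DQfD-style objective (deep Q-learning from demonstrations), which adds a large-margin supervised term on expert transitions to the usual TD error. The sketch below, in PyTorch with hypothetical tensor shapes and hyperparameters, illustrates that general idea; it is not claimed to be QTNet's actual loss.

```python
import torch
import torch.nn.functional as F

def dqfd_style_loss(q_net, target_net, batch, demo_mask,
                    gamma=0.99, margin=0.8, lambda_sup=1.0):
    """Demonstration-augmented Q-learning loss (DQfD-style sketch).

    Combines a one-step TD error with a large-margin supervised term on
    transitions that came from an expert (e.g. a rule-based technical
    trading strategy). The supervised term pushes the demonstrated
    action's Q-value above every alternative by at least `margin`.
    """
    obs, actions, rewards, next_obs, dones = batch  # actions: (B,) long
    q = q_net(obs)                                  # (B, num_actions)
    q_taken = q.gather(1, actions.unsqueeze(1)).squeeze(1)

    # Standard one-step TD target from a frozen target network.
    with torch.no_grad():
        next_q = target_net(next_obs).max(dim=1).values
        td_target = rewards + gamma * (1.0 - dones) * next_q
    td_loss = F.smooth_l1_loss(q_taken, td_target)

    # Large-margin term, applied only where demo_mask == 1:
    # max_a [Q(s, a) + margin * 1{a != a_expert}] - Q(s, a_expert)
    margins = torch.full_like(q, margin)
    margins.scatter_(1, actions.unsqueeze(1), 0.0)
    sup = (q + margins).max(dim=1).values - q_taken
    sup_loss = (sup * demo_mask).sum() / demo_mask.sum().clamp(min=1.0)

    return td_loss + lambda_sup * sup_loss
```

In the original DQfD recipe the agent is first pre-trained on demonstration transitions alone before interacting with the environment; a trading agent could analogously pre-train on transitions generated by classical technical-analysis rules before acting in the simulated market.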