
Calibration of Derivative Pricing Models: a Multi-Agent Reinforcement Learning Perspective

Published 14 Mar 2022 in q-fin.CP, cs.AI, cs.LG, and q-fin.MF | arXiv:2203.06865v4

Abstract: One of the most fundamental questions in quantitative finance is the existence of continuous-time diffusion models that fit market prices of a given set of options. Traditionally, one employs a mix of intuition, theoretical and empirical analysis to find models that achieve exact or approximate fits. Our contribution is to show how a suitable game-theoretical formulation of this problem can help solve this question by leveraging existing developments in modern deep multi-agent reinforcement learning to search in the space of stochastic processes. Our experiments show that we are able to learn local volatility, as well as the path-dependence required in the volatility process to minimize the price of a Bermudan option. Our algorithm can be seen as a particle method à la Guyon and Henry-Labordère where particles, instead of being designed to ensure $\sigma_{loc}(t,S_t)^2 = \mathbb{E}[\sigma_t^2|S_t]$, are learning RL-driven agents cooperating towards more general calibration targets.
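For readers unfamiliar with the particle method referenced in the abstract, the sketch below shows, under simplifying assumptions, how the conditional expectation $\mathbb{E}[\sigma_t^2|S_t]$ is typically estimated from a particle cloud by kernel regression at a single time step; this is the fixed-point quantity that the paper's RL-driven agents replace with learned, reward-driven behaviour. All names, the synthetic particle cloud, and the stand-in Dupire local volatility slice are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def cond_expectation(S, sigma, S_grid, bandwidth=5.0):
    """Nadaraya-Watson estimate of E[sigma^2 | S] on a grid of spot values,
    using a Gaussian kernel over the particle cloud (S, sigma)."""
    w = np.exp(-0.5 * ((S_grid[:, None] - S[None, :]) / bandwidth) ** 2)
    return (w @ sigma**2) / np.maximum(w.sum(axis=1), 1e-12)

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical particle cloud at some time t: spots and stochastic vols
S = 100.0 * np.exp(rng.normal(0.0, 0.1, n))
sigma = 0.2 * np.exp(rng.normal(0.0, 0.3, n))

S_grid = np.linspace(S.min(), S.max(), 50)

# Stand-in Dupire local volatility slice (illustrative shape only)
sigma_dup = 0.2 + 0.05 * np.log(S_grid / 100.0) ** 2

# In a local stochastic volatility model dS_t = l(t,S_t) * sigma_t * S_t dW_t,
# matching vanilla prices requires l(t,S)^2 * E[sigma_t^2 | S_t = S] = sigma_dup(t,S)^2,
# so the leverage function at this time step is:
leverage = sigma_dup / np.sqrt(cond_expectation(S, sigma, S_grid))
```

In the paper's reformulation, the particles above would instead be cooperating agents whose volatility choices are trained with multi-agent RL against a reward encoding a (possibly more general) calibration target, rather than being pinned to this fixed-point condition.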

References (24)
  1. Sig-SDEs model for quantitative finance. Proceedings of the First ACM International Conference on AI in Finance.
  2. Learning to Optimize: A Primer and A Benchmark. https://arxiv.org/abs/2103.12828.
  3. Training Stronger Baselines for Learning to Optimize. NeurIPS.
  4. A generative adversarial network approach to calibration of local stochastic volatility models. Risks.
  5. Fey, M. (2012). Symmetric games with only asymmetric equilibria. Games and Economic Behavior, 75(1):424–427.
  6. Learning to communicate with deep multi-agent reinforcement learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, pages 2145–2153.
  7. Volatility is rough. Quantitative Finance.
  8. Robust pricing and hedging via neural SDEs. https://arxiv.org/abs/2007.04154.
  9. Cooperative multi-agent control using deep reinforcement learning. In Autonomous Agents and Multiagent Systems, pages 66–83. Springer International Publishing.
  10. Guyon, J. (2014). Path-dependent volatility. Risk Magazine.
  11. Being particular about calibration. Risk Magazine.
  12. Recent Advances in Reinforcement Learning in Finance. https://arxiv.org/abs/2112.04553.
  13. Hefti, A. (2017). Equilibria in symmetric games: Theory and applications. Theoretical Economics.
  14. Jaimungal, S. (2022). Reinforcement learning and stochastic optimisation. Finance and Stochastics, 26:103–129.
  15. Evolution Strategies for Approximate Solution of Bayesian Games. Proceedings of the AAAI Conference on Artificial Intelligence.
  16. RLlib: Abstractions for distributed reinforcement learning. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3053–3062.
  17. Valuing American options by simulation: A simple least-squares approach. The Review of Financial Studies, 14:113–147.
  18. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML'16, pages 1928–1937. JMLR.org.
  19. High-Dimensional Continuous Control Using Generalized Advantage Estimation. ICLR.
  20. Proximal Policy Optimization Algorithms. https://arxiv.org/abs/1707.06347.
  21. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489.
  22. Reinforcement Learning: An Introduction. The MIT Press, second edition.
  23. Towards multi-agent reinforcement learning driven over-the-counter market simulations. Mathematical Finance, Special Issue on Machine Learning in Finance.
  24. Calibration of Shared Equilibria in General Sum Partially Observable Markov Games. Advances in Neural Information Processing Systems (NeurIPS).