Model-based Deep Reinforcement Learning for Dynamic Portfolio Optimization (1901.08740v1)

Published 25 Jan 2019 in cs.LG, cs.AI, and stat.ML

Abstract: Dynamic portfolio optimization is the process of sequentially allocating wealth to a collection of assets over consecutive trading periods, based on an investor's return-risk profile. Automating this process with machine learning remains a challenging problem. Here, we design a deep reinforcement learning (RL) architecture with an autonomous trading agent such that investment decisions and actions are made periodically and autonomously, based on a global objective. In particular, rather than relying on a purely model-free RL agent, we train our trading agent using a novel RL architecture consisting of an infused prediction module (IPM), a generative adversarial data augmentation module (DAM), and a behavior cloning module (BCM). Our model-based approach works with both on-policy and off-policy RL algorithms. We further design the back-testing and execution engine, which interacts with the RL agent in real time. Using historical *real* financial market data, we simulate trading with practical constraints, and demonstrate that our proposed model is robust, profitable, and risk-sensitive compared to baseline trading strategies and model-free RL agents from prior work.
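The abstract describes an agent whose policy is shaped by three auxiliary modules: a prediction module (IPM), a data augmentation module (DAM), and a behavior cloning module (BCM). The sketch below is a heavily simplified, hypothetical illustration of how such modules could be wired into a portfolio-allocation loop; it is not the paper's actual architecture (the paper's IPM and DAM are learned neural/GAN components, whereas here they are stand-ins: a moving-average forecast and a bootstrap resampler), and all function names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def ipm_predict(returns_window):
    # IPM stand-in: naive forecast of next-period returns as the
    # window mean (the paper uses a learned prediction network).
    return returns_window.mean(axis=0)

def dam_augment(returns, n_synthetic=2):
    # DAM stand-in: bootstrap-resample historical return paths to
    # enlarge the training set (the paper uses a GAN for this).
    idx = rng.integers(0, len(returns), size=(n_synthetic, len(returns)))
    return returns[idx]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def act(returns_window, prev_weights, bc_strength=0.1):
    # Allocate more weight to assets with higher predicted returns,
    # then blend with the previous allocation as a BCM-like
    # regularizer that discourages abrupt portfolio turnover.
    forecast = ipm_predict(returns_window)
    raw = softmax(forecast * 50.0)
    return (1.0 - bc_strength) * raw + bc_strength * prev_weights

# Toy back-test loop over synthetic data for 3 assets.
T, n_assets, window = 100, 3, 10
returns = rng.normal(0.0005, 0.01, size=(T, n_assets))
weights = np.ones(n_assets) / n_assets
wealth = 1.0
for t in range(window, T):
    weights = act(returns[t - window:t], weights)
    wealth *= 1.0 + float(returns[t] @ weights)
```

The blending step mirrors the role the abstract assigns to behavior cloning, keeping the agent's actions close to a reference policy, while the augmentation function shows where GAN-generated data would enter the training pipeline.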

Authors (5)
  1. Pengqian Yu (19 papers)
  2. Joon Sern Lee (4 papers)
  3. Ilya Kulyatin (1 paper)
  4. Zekun Shi (10 papers)
  5. Sakyasingha Dasgupta (16 papers)
Citations (61)
