An intelligent financial portfolio trading strategy using deep Q-learning (1907.03665v4)

Published 8 Jul 2019 in q-fin.PM and cs.AI

Abstract: Portfolio traders strive to identify dynamic portfolio allocation schemes so that their total budgets are efficiently allocated through the investment horizon. This study proposes a novel portfolio trading strategy in which an intelligent agent is trained to identify an optimal trading action by using deep Q-learning. We formulate a Markov decision process model for the portfolio trading process, and the model adopts a discrete combinatorial action space, determining the trading direction at prespecified trading size for each asset, to ensure practical applicability. Our novel portfolio trading strategy takes advantage of three features to outperform in real-world trading. First, a mapping function is devised to handle and transform an initially found but infeasible action into a feasible action closest to the originally proposed ideal action. Second, by overcoming the dimensionality problem, this study establishes models of agent and Q-network for deriving a multi-asset trading strategy in the predefined action space. Last, this study introduces a technique that has the advantage of deriving a well-fitted multi-asset trading strategy by designing an agent to simulate all feasible actions in each state. To validate our approach, we conduct backtests for two representative portfolios and demonstrate superior results over the benchmark strategies.

Authors (3)
  1. Hyungjun Park (5 papers)
  2. Min Kyu Sim (1 paper)
  3. Dong Gu Choi (4 papers)
Citations (81)

Summary

Intelligent Financial Portfolio Trading Using Deep Q-Learning

The paper "An intelligent financial portfolio trading strategy using deep Q-learning" by Hyungjun Park, Min Kyu Sim, and Dong Gu Choi presents an innovative approach to financial portfolio management by employing Deep Q-Learning (DQL), a form of reinforcement learning to optimize trading strategies. In the field of financial portfolio trading, the primary objective is to maximize returns relative to risk across an investment horizon, while dynamically responding to market conditions.

Approach and Methodology

The authors formulate a Markov decision process (MDP) model tailored to the portfolio trading problem and translate it into a DQL framework. The model incorporates a discrete combinatorial action space, which simplifies decision-making by assigning a buy, hold, or sell direction to each asset at a predefined trading size. This is particularly advantageous because it mirrors the actions a trader would actually take, making the derived strategy immediately applicable in real-world settings.
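
As a concrete illustration, an action space of this kind can be enumerated directly. The sketch below is a minimal Python example, assuming three trading directions per asset (sell, hold, buy) executed at a fixed trading size; the paper's exact encoding may differ.

```python
from itertools import product

# Trading directions per asset: -1 = sell, 0 = hold, +1 = buy,
# each executed at a prespecified trading size.
DIRECTIONS = (-1, 0, 1)

def enumerate_actions(n_assets: int):
    """Enumerate the discrete combinatorial action space.

    Each action assigns one trading direction to each asset, so the
    space has 3**n_assets elements -- the source of the dimensionality
    problem discussed below.
    """
    return list(product(DIRECTIONS, repeat=n_assets))

actions = enumerate_actions(3)
print(len(actions))  # 27 joint actions for a 3-asset portfolio
```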

The paper highlights several unique aspects of their approach:

  1. Mapping Function: To overcome infeasibility in discrete action spaces, the authors propose a mapping function that translates an infeasible action into the closest feasible one. This prevents the agent from selecting impractical actions that would otherwise inflate transaction costs (a sketch follows this list).
  2. Dimensionality Challenge: Given the exponential increase in action space size with additional assets, traditional RL models struggle with dimensionality. The authors address this by structuring the agent and Q-network to efficiently handle multi-asset scenarios.
  3. Simulation of Feasible Actions: In an innovative twist, the paper suggests simulating all feasible actions within each state to expand the agent's experiential learning. This mitigates the scarcity of training data by extracting more learning signal from each observed state, which can significantly enhance model robustness (see the second sketch below).
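
To make the mapping function of point 1 concrete, the following is a minimal sketch of one plausible implementation. It assumes the only feasibility constraints are that sells cannot exceed current holdings (no short selling) and that buys cannot exceed the available budget; the paper's actual constraints and notion of "closest" may differ.

```python
import numpy as np

def map_to_feasible(action, holdings, cash, prices, trade_size):
    """Map a proposed action onto a nearby feasible action (illustrative).

    action     -- NumPy array of {-1, 0, +1} trading directions per asset
    holdings   -- NumPy array of shares currently held per asset
    cash       -- available budget
    prices     -- NumPy array of current prices per asset
    trade_size -- number of shares traded per buy/sell direction
    """
    feasible = np.asarray(action, dtype=int).copy()

    # Cancel sells that would exceed current holdings (no short selling).
    feasible[(feasible == -1) & (holdings < trade_size)] = 0

    # Budget available after executing all remaining sells.
    budget = cash + trade_size * prices[feasible == -1].sum()

    # Cancel the most expensive buys first until the remaining buys fit
    # the budget; dropping as few directions as possible keeps the result
    # "close" to the proposed action.
    for i in np.argsort(-prices):
        if trade_size * prices[feasible == 1].sum() <= budget:
            break
        if feasible[i] == 1:
            feasible[i] = 0
    return feasible
```

In a full agent, such a mapping would be applied to the greedy action proposed by the Q-network before any trades are executed.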

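Point 3, simulating all feasible actions, can be sketched in a few lines. The `env.simulate` hook below is a hypothetical helper that returns the reward and successor state for an action without advancing the real environment; the paper's implementation details may differ.

```python
def gather_experience(env, state, feasible_actions, replay_buffer):
    """Store one transition per feasible action from the current state.

    Rather than recording only the action actually taken, the agent
    evaluates every feasible action from the visited state, multiplying
    the experience collected per time step from the same market data.
    """
    for action in feasible_actions:
        reward, next_state = env.simulate(state, action)  # hypothetical hook
        replay_buffer.append((state, action, reward, next_state))
```
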
Experimental Validation

The efficacy of the proposed methodology was validated using two portfolios: a US portfolio of ETFs tracking major indexes and a Korean portfolio based on KOSPI indexes. Benchmarked against traditional strategies (buy-and-hold, random selection, momentum, and reversion), the DQL-derived strategy demonstrated superior cumulative returns and Sharpe ratios, underscoring its effectiveness in balancing risk and return. Moreover, the strategy's average turnover rate was competitive, indicating realistic applicability without excessive transaction costs.
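
For reference, these headline backtest metrics can be computed from a strategy's periodic returns and portfolio weights as follows. This is a standard formulation, assuming daily returns annualized over 252 trading days; it is not taken from the paper.

```python
import numpy as np

def cumulative_return(returns):
    """Total compounded return over the backtest."""
    return np.prod(1.0 + np.asarray(returns)) - 1.0

def sharpe_ratio(returns, risk_free=0.0, periods_per_year=252):
    """Annualized Sharpe ratio of a periodic return series."""
    excess = np.asarray(returns) - risk_free
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

def average_turnover(weights):
    """Mean one-period turnover of a (T, n_assets) weight matrix."""
    w = np.asarray(weights)
    return np.abs(np.diff(w, axis=0)).sum(axis=1).mean()
```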

Implications and Future Work

The implications of this research are both practical and theoretical. From a practical standpoint, this approach provides a scalable framework for implementing RL in financial trading, which can dynamically adjust to market conditions, potentially leading to more robust portfolio management strategies. The ability to directly apply derived actions in real-world trading scenarios represents a significant advancement over previous RL models, which often required additional decision-making to translate abstract outputs into practical actions.

Theoretically, the paper contributes to the growing body of research merging advanced machine learning techniques with complex decision-making processes in finance. Future work could address current limitations such as the scalability of the action space, and could incorporate risk measures directly into the reward structure to further refine the balance between risk and return. Exploring heterogeneous asset classes and expanding tactical flexibility in response to unforeseen market dynamics are also promising directions for subsequent development.

In conclusion, this paper represents a significant step forward in leveraging reinforcement learning, specifically deep Q-learning, for intelligent portfolio trading strategies that are not only computationally feasible but also practically effective in real-world financial decisions.
