Value of a Hold Action in Discrete RL Trading Agents

Determine whether adding an explicit hold action to the discrete action space of a reinforcement learning stock trading agent built on the gym-anytrading environment yields measurable value over a buy/sell-only action space under dynamically changing market conditions, where the agent must weigh the trade-off between gains and risks.

Background

The paper’s trading environment, built on gym-anytrading, uses a discrete action space with only buy and sell actions; the agent is always in the market and does not occupy a hold or do-nothing state. The authors note ambiguity between a hold state and a do-nothing state when an agent has no assets versus when it chooses to retain assets.
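A minimal sketch of this two-action setup (the enum names mirror gym-anytrading's `Actions`/`Positions`; the transition function itself is an illustrative simplification, not the library's code):

```python
from enum import Enum

class Actions(Enum):
    Sell = 0
    Buy = 1

class Positions(Enum):
    Short = 0
    Long = 1

def transition(position: Positions, action: Actions) -> Positions:
    """With only Buy and Sell, the agent is always in the market:
    every action maps to a position, and there is no flat/do-nothing state."""
    return Positions.Long if action == Actions.Buy else Positions.Short

# The agent can never be "out" of the market: a repeated Sell
# simply keeps the position Short rather than going flat.
pos = transition(Positions.Long, Actions.Sell)
pos = transition(pos, Actions.Sell)
```

Because every action resolves to either Short or Long, "holding assets" and "doing nothing with no assets" are not representable as distinct states, which is exactly the ambiguity the authors point out.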

In discussing this ambiguity, the authors explicitly state uncertainty about whether adding a hold action would provide value, given the need for the agent to balance gains and risks in a dynamic, ever-changing dataset. This raises a concrete question about the benefit of extending the discrete action space to include hold as a distinct action.
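One way the extension could be framed (an illustrative sketch, not taken from the paper): add a third Hold action and a flat position, so that "retain assets" and "do nothing while flat" become distinguishable states.

```python
from enum import Enum

class Actions(Enum):
    Sell = 0
    Buy = 1
    Hold = 2  # new explicit action under consideration

class Position(Enum):
    Flat = 0  # hypothetical no-position state, absent from the original setup
    Long = 1

def transition(position: Position, action: Actions) -> Position:
    """Extended dynamics: Hold retains the current position, so holding
    while Long (keeping assets) differs from holding while Flat (idle)."""
    if action == Actions.Buy:
        return Position.Long
    if action == Actions.Sell:
        return Position.Flat
    return position  # Hold: keep whatever we currently have

# Hold while Long = "retain assets"; Hold while Flat = "do nothing".
long_hold = transition(Position.Long, Actions.Hold)
flat_hold = transition(Position.Flat, Actions.Hold)
```

Whether this larger action space actually improves risk-adjusted returns, or merely slows learning by enlarging the policy space, is precisely the open question raised.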

References

"It was unclear whether the hold state presented any value as the agent must effectively examine the trade-off between gains and risks for a dynamic and ever-changing dataset."

Reinforcement Learning Framework for Quantitative Trading (2411.07585 - Yasin et al., 2024), Subsection "Considerations", Section "Literature Review"