
Effect of delay on momentum strategy returns in DRLP-MG

Determine whether introducing a delay effect into the Dual Reinforcement Learning Policies Minority Game (DRLP-MG) causes the emergent momentum strategy in the Q-subpopulation's external synergy cluster to yield higher long-term returns, rather than the lower returns observed in the baseline model.


Background

The paper introduces a Dual Reinforcement Learning Policies Minority Game (DRLP-MG) with a Q-learning subpopulation and a classical-policy subpopulation, uncovering synergy mechanisms and a first-order phase transition in resource allocation.

Within the Q-learning subpopulation, an external synergy cluster (ES-cluster) self-organizes a momentum strategy that improves system-level resource utilization but yields lower long-term rewards for adopters in the model. The authors note that real markets often report higher returns for momentum strategies and raise the question of whether adding a delay effect to the model could reverse the return outcome.
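To make the open question concrete, the sketch below shows one place a "delay effect" could enter a Q-learning minority-game simulation: rewards are credited only a fixed number of rounds after the action that earned them. This is a minimal illustration, not the authors' DRLP-MG. The classical-policy rule (fixed lookup-table strategies with virtual scoring), the reward scheme, the meaning of the delay parameter DELAY, and all parameter values are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_Q, N_C = 51, 50           # Q-learning and classical subpopulation sizes (assumed)
M = 2                       # memory length: bits of winning-side history (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.05   # illustrative Q-learning parameters
DELAY = 3                   # hypothetical reward delay; 0 recovers the undelayed case
T = 5000

n_states = 2 ** M

# Q-subpopulation: one Q-table per agent, indexed by (history-state, action)
Q = np.zeros((N_Q, n_states, 2))

# Classical subpopulation: each agent holds S fixed lookup-table strategies
S = 2
strategies = rng.integers(0, 2, size=(N_C, S, n_states))
scores = np.zeros((N_C, S))

history = rng.integers(0, 2, size=M)   # recent winning sides
pending = []                           # (due_step, agent, state, action, reward)


def state_of(hist):
    """Encode the last M winning sides as an integer state index."""
    return int("".join(map(str, hist)), 2)


for t in range(T):
    s = state_of(history)

    # Q-agents act epsilon-greedily on their Q-tables
    greedy = Q[:, s, :].argmax(axis=1)
    explore = rng.random(N_Q) < EPS
    a_q = np.where(explore, rng.integers(0, 2, size=N_Q), greedy)

    # Classical agents follow their currently best-scoring strategy
    best = scores.argmax(axis=1)
    a_c = strategies[np.arange(N_C), best, s]

    # Minority side wins
    attendance = a_q.sum() + a_c.sum()
    winner = int(attendance < (N_Q + N_C) / 2)

    # Virtual scoring for the classical strategies
    scores += (strategies[:, :, s] == winner).astype(float) * 2 - 1

    # Queue Q-agent rewards; with DELAY > 0 the temporal-difference update
    # is applied only DELAY rounds after the action was taken
    for i in range(N_Q):
        r = 1.0 if a_q[i] == winner else -1.0
        pending.append((t + DELAY, i, s, int(a_q[i]), r))

    # Apply rewards that are due this round (next-state taken at payoff time,
    # a deliberate simplification of how a delay could be modeled)
    s_next = state_of(np.append(history[1:], winner))
    due = [p for p in pending if p[0] <= t]
    pending = [p for p in pending if p[0] > t]
    for _, i, s_old, a_old, r in due:
        td_target = r + GAMMA * Q[i, s_next].max()
        Q[i, s_old, a_old] += ALPHA * (td_target - Q[i, s_old, a_old])

    history = np.append(history[1:], winner)

print("mean Q-value gap between actions:", Q[:, :, 1].mean() - Q[:, :, 0].mean())
```

Comparing long-run per-agent rewards for DELAY = 0 against DELAY > 0 in a setup like this is one way to probe, in simulation, whether a delay can flip the sign of the momentum strategy's relative return; the actual answer for DRLP-MG is the open question stated above.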

References

Firstly, while our model predicts the momentum strategy will yield lower returns, it’s unclear if adding a delay effect can transform it to generate higher returns in practice.

Zhang et al., "Dual Reinforcement Learning Synergy in Resource Allocation: Emergence of Self-Organized Momentum Strategy," arXiv:2509.11161, 14 Sep 2025, Section 5 (Discussion and Conclusion).