
Exploring Competitive and Collusive Behaviors in Algorithmic Pricing with Deep Reinforcement Learning (2503.11270v1)

Published 14 Mar 2025 in econ.GN and q-fin.EC

Abstract: Nowadays, a significant share of the business-to-consumer sector is based on online platforms like Amazon and Alibaba and uses AI for pricing strategies. This has sparked debate on whether pricing algorithms may tacitly collude to set supra-competitive prices without being explicitly designed to do so. Our study addresses these concerns by examining the risk of collusion when Reinforcement Learning (RL) algorithms are used to decide on pricing strategies in competitive markets. Prior research in this field focused on Tabular Q-learning (TQL) and led to opposing views on whether learning-based algorithms can result in supra-competitive prices. Building on this, our work contributes to this ongoing discussion by providing a more nuanced numerical study that goes beyond TQL, additionally covering off-policy and on-policy Deep Reinforcement Learning (DRL), two distinct families of algorithms that have recently gained attention for algorithmic pricing. We study multiple Bertrand oligopoly variants and show that algorithmic collusion depends on the algorithm used. In our experiments, we observed that TQL tends to exhibit higher collusion and price dispersion. Moreover, it suffers from instability and disparity, as agents with higher learning rates consistently achieve higher profits, and it lacks robustness in state representation, with pricing dynamics varying significantly based on information access. In contrast, DRL algorithms, such as PPO and DQN, generally converge to lower prices closer to the Nash equilibrium. Additionally, we show that when pre-trained TQL agents interact with DRL agents, the latter quickly outperform the former, highlighting the advantages of DRL in pricing competition. Lastly, we find that competition between heterogeneous DRL algorithms, such as PPO and DQN, tends to reduce the likelihood of supra-competitive pricing.
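
To make the kind of experiment the abstract describes concrete, the sketch below is a minimal, illustrative setup (not the paper's implementation): two epsilon-greedy tabular Q-learning agents repeatedly set prices in a logit-demand Bertrand duopoly, conditioning on the previous period's price pair. The price grid, demand and cost parameters, learning rates, discount factor, and exploration schedule are all assumptions chosen for illustration; whether long-run prices end up near the Nash level or above it depends on exactly these choices, which is the sensitivity the paper investigates.

```python
# Illustrative sketch only: two tabular Q-learning agents pricing in a
# logit-demand Bertrand duopoly. All parameters are assumed for illustration
# and do not reproduce the paper's experiments.
import numpy as np

rng = np.random.default_rng(0)

prices = np.linspace(1.0, 2.0, 15)   # discrete price grid (assumed)
c, a, mu = 1.0, 2.0, 0.25            # marginal cost, quality index, logit noise

def profits(p_i, p_j):
    """Per-period profits under a symmetric logit demand with an outside good."""
    u = np.exp((a - np.array([p_i, p_j])) / mu)
    shares = u / (u.sum() + 1.0)     # outside option has utility 0
    return (np.array([p_i, p_j]) - c) * shares

n = len(prices)
# Q[i][own_last, rival_last, action] -> estimated value for agent i
Q = [np.zeros((n, n, n)) for _ in range(2)]
alpha, gamma = [0.15, 0.15], 0.95
state = (int(rng.integers(n)), int(rng.integers(n)))  # last-period price indices

for t in range(50_000):
    eps = np.exp(-1e-4 * t)          # decaying exploration rate
    acts = []
    for i in range(2):
        own, rival = state[i], state[1 - i]
        if rng.random() < eps:
            acts.append(int(rng.integers(n)))
        else:
            acts.append(int(np.argmax(Q[i][own, rival])))
    pi = profits(prices[acts[0]], prices[acts[1]])
    next_state = (acts[0], acts[1])
    for i in range(2):
        own, rival = state[i], state[1 - i]
        n_own, n_rival = next_state[i], next_state[1 - i]
        target = pi[i] + gamma * Q[i][n_own, n_rival].max()
        Q[i][own, rival, acts[i]] += alpha[i] * (target - Q[i][own, rival, acts[i]])
    state = next_state

print("Long-run prices:", prices[state[0]], prices[state[1]])
```

Comparing the converged prices against the one-shot Bertrand-Nash price of the same demand system (computed, e.g., by grid search over best responses) is the standard way to quantify supra-competitive pricing in this literature.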
