Semicoarse Correlated Equilibria and LP-Based Guarantees for Gradient Dynamics in Normal-Form Games (2502.20466v3)
Abstract: Projected gradient ascent is known to be a no-external-regret learning algorithm. However, recent empirical work shows that projected gradient ascent often finds the Nash equilibrium even in settings beyond two-player zero-sum interactions or potential games, including games where the set of coarse correlated equilibria is very large. We show that gradient ascent in fact satisfies a stronger class of linear $\Phi$-regret in normal-form games, yielding a refined solution concept which we dub semicoarse correlated equilibria. Our theoretical analysis of discretised Bertrand competition mirrors results recently established for mean-based learning in first-price auctions. When at least two firms share the lowest marginal cost, Nash equilibria emerge as the only semicoarse equilibria under concavity conditions on firm profits. In first-price auctions, the granularity of the bid space affects the set of semicoarse equilibria, but finer granularity at lower bids likewise induces convergence to Nash equilibria. Unlike previous work on proving convergence to Nash equilibria, which often relies on epoch-based analysis and probability-theoretic machinery, our LP-based duality approach enables a simple and tractable analysis of equilibrium selection under gradient-based learning.
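For concreteness, the sketch below (not taken from the paper) illustrates the learning dynamic the abstract refers to: projected gradient ascent over mixed strategies in a two-player normal-form game, with Euclidean projection onto the probability simplex. The payoff matrices, step size `eta`, and iteration count are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of projected gradient ascent in a two-player normal-form game.
# Illustrative only; payoff matrices, eta, and step count are arbitrary choices.
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def projected_gradient_ascent(A, B, steps=10_000, eta=0.05, seed=0):
    """Both players ascend their own expected payoff and project back to the simplex."""
    rng = np.random.default_rng(seed)
    x = project_simplex(rng.random(A.shape[0]))   # row player's mixed strategy
    y = project_simplex(rng.random(A.shape[1]))   # column player's mixed strategy
    for _ in range(steps):
        gx = A @ y        # gradient of x^T A y with respect to x
        gy = B.T @ x      # gradient of x^T B y with respect to y
        x = project_simplex(x + eta * gx)
        y = project_simplex(y + eta * gy)
    return x, y

if __name__ == "__main__":
    # Toy coordination game: gradient dynamics typically settle on a pure Nash equilibrium.
    A = np.array([[2.0, 0.0], [0.0, 1.0]])
    x, y = projected_gradient_ascent(A, A)
    print(x, y)
```

This is only a sketch of the dynamic being analysed; the paper's contribution concerns which correlated distributions of play such dynamics can converge to, not this particular implementation.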