Any-time last-iterate convergence of aggressive regularization schedules (e.g., doubling trick)
Determine whether aggressive regularization schedules, such as applying the doubling trick to the regularization strength in regularized gradient-based learning dynamics for games, guarantee true any-time last-iterate convergence to Nash equilibria, i.e., that at every round T the current iterate is an ε(T)-approximate Nash equilibrium, with ε(T) → 0 as T → ∞.
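To make the schedule concrete, the sketch below runs regularized gradient descent-ascent on a small zero-sum matrix game with a doubling-trick schedule: each epoch doubles in length while the regularization strength is halved. This is an illustrative assumption, not the paper's construction; the Euclidean regularizer, step size, epoch parameters, and the use of the duality gap as the ε(T) measure are all choices made here for illustration. The open question is whether the last-iterate gap is guaranteed to shrink at every round, not just at epoch boundaries.

```python
# Minimal sketch (not the paper's algorithm): a doubling-trick regularization
# schedule for gradient descent-ascent on a zero-sum matrix game.
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def duality_gap(A, x, y):
    """Exploitability of (x, y) in the zero-sum game min_x max_y x^T A y."""
    return np.max(A.T @ x) - np.min(A @ y)

def doubling_trick_gda(A, num_epochs=8, base_len=100, mu0=1.0, eta=0.05):
    """Regularized GDA; each epoch doubles in length while mu is halved."""
    n, m = A.shape
    x = np.ones(n) / n
    y = np.ones(m) / m
    gaps = []
    for k in range(num_epochs):
        mu_k = mu0 / 2.0 ** k        # regularization strength halves each epoch
        T_k = base_len * 2 ** k      # epoch length doubles each epoch
        for _ in range(T_k):
            gx = A @ y + mu_k * x    # gradient of the regularized loss for the min player
            gy = A.T @ x - mu_k * y  # gradient for the max player (ascent direction)
            x = project_simplex(x - eta * gx)
            y = project_simplex(y + eta * gy)
            gaps.append(duality_gap(A, x, y))  # track the last iterate at every round
    return x, y, gaps

if __name__ == "__main__":
    # Rock-paper-scissors: unique Nash equilibrium at the uniform strategies.
    A = np.array([[0.0, 1.0, -1.0],
                  [-1.0, 0.0, 1.0],
                  [1.0, -1.0, 0.0]])
    x, y, gaps = doubling_trick_gda(A)
    for t in (99, 999, len(gaps) - 1):
        print(f"round {t + 1}: duality gap of the last iterate = {gaps[t]:.4f}")
```

Printing the gap at a few checkpoints illustrates what an any-time guarantee would demand: a bound of the form ε(T) holding simultaneously at every round T, not only in the limit or at the end of each epoch.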
References
One might wonder whether more aggressive schedules, such as the doubling trick, could improve this. However, it remains unclear whether such methods guarantee true any-time convergence.
— From Average-Iterate to Last-Iterate Convergence in Games: A Reduction and Its Applications
(arXiv:2506.03464, Cai et al., 4 Jun 2025), in Related Works: Last-Iterate Convergence with Gradient Feedback