
Any-time last-iterate convergence of aggressive regularization schedules (e.g., doubling trick)

Determine whether aggressive regularization schedules, such as applying the doubling trick to the regularization strength in regularized gradient-based learning dynamics for games, guarantee true any-time last-iterate convergence to Nash equilibria (i.e., that for every horizon T, the iterate at round T is an ε(T)-approximate Nash equilibrium).


Background

In the discussion of last-iterate convergence with gradient feedback, the paper reviews regularization-based approaches that modify the game to become strongly monotone, enabling linear last-iterate convergence on the modified problem. When the regularization strength is set based on the target accuracy ε, the resulting guarantees apply only to the final iterate and require knowledge of the horizon T, thus failing to provide any-time last-iterate convergence.
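
As a rough illustration of this construction (a sketch under standard monotone-operator assumptions; the notation F, z, τ below is ours, not the paper's): adding a τ-scaled identity term to a monotone game operator makes it τ-strongly monotone, which is what yields the linear last-iterate rate on the regularized problem, at the cost of a bias relative to the original equilibrium that shrinks with τ.

```latex
% Illustrative sketch of the regularization trick (notation is ours, not the paper's).
% F is the monotone game gradient operator, z the joint strategy profile,
% and tau > 0 the regularization strength.
\[
  F_\tau(z) \;=\; F(z) + \tau z,
  \qquad
  \langle F_\tau(z) - F_\tau(z'),\, z - z' \rangle \;\ge\; \tau \,\| z - z' \|^2 ,
\]
% so gradient-based dynamics converge linearly to the unique solution of the
% regularized problem; tau is then tuned to the target accuracy eps so that this
% solution is an eps-approximate Nash equilibrium of the original game.
```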

Using a diminishing regularization schedule yields any-time last-iterate convergence, but at a slower rate of Õ(T^{-1/4}). The authors note that more aggressive schedules, such as the doubling trick, might improve the rate, but it is not clear whether such methods provide the stronger any-time guarantee required at every iteration.
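
The following is a minimal, purely illustrative sketch of what such an aggressive schedule could look like: an extragradient inner loop on a toy bilinear game, with the regularization strength halved and the epoch length doubled each epoch. The game, step size, and schedule parameters are placeholder choices, not the paper's algorithm, and the open question is precisely whether the intermediate iterates of a scheme like this satisfy an any-time guarantee rather than only an end-of-epoch one.

```python
import numpy as np

# Illustrative sketch only: a doubling-trick regularization schedule wrapped
# around an extragradient inner loop on the bilinear game f(x, y) = x^T A y.
# All names and parameters (A, tau0, eta, epoch_len, number of epochs) are
# placeholder choices, not taken from the paper.
rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
x, y = np.ones(d), np.ones(d)

tau0, eta = 1.0, 0.05
epoch_len = 100

for epoch in range(8):
    tau = tau0 / 2**epoch                       # halve the regularization each epoch
    for _ in range(epoch_len):
        # Extragradient step on the tau-regularized game; the tau * z terms make
        # the operator tau-strongly monotone, so the inner loop converges linearly
        # to the regularized equilibrium.
        gx, gy = A @ y + tau * x, A.T @ x - tau * y
        xh, yh = x - eta * gx, y + eta * gy     # extrapolation point
        gxh, gyh = A @ yh + tau * xh, A.T @ xh - tau * yh
        x, y = x - eta * gxh, y + eta * gyh     # update from the extrapolated gradients
    epoch_len *= 2                              # double the epoch length
    # Gradient norm of the *unregularized* game as a crude equilibrium-gap proxy.
    gap = np.linalg.norm(A @ y) + np.linalg.norm(A.T @ x)
    print(f"epoch {epoch}: tau = {tau:.4g}, gap proxy = {gap:.4g}")
```

In this sketch the gap proxy shrinks at the end of every epoch, but nothing in the schedule itself certifies that the iterates produced in the middle of an epoch, just after the regularization is halved, are comparably accurate, which is the gap the question above asks about.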

References

One might wonder whether more aggressive schedules, such as the doubling trick, could improve this. However, it remains unclear whether such methods guarantee true any-time convergence.

From Average-Iterate to Last-Iterate Convergence in Games: A Reduction and Its Applications (2506.03464 - Cai et al., 4 Jun 2025) in Related Works — Last-Iterate Convergence with Gradient Feedback