Fast rates for GLMs via strong convexity on the simplex

Establish that exploiting strong convexity of the generalized linear model (GLM) loss over the probability simplex Δ_d yields fast excess prediction risk rates of order O((log d)/n) for procedures based on exponentiated gradient descent or KL-divergence regularization in the model aggregation setting, thereby improving on the slow-rate O(√((log d)/n)) bounds currently derived under general convexity.
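For concreteness, the target guarantee in a typical aggregation formulation (the notation here is illustrative, not taken verbatim from the paper) is

    R(θ) = E[ℓ(⟨θ, f(X)⟩, Y)],   θ ∈ Δ_d = {θ ∈ R^d : θ_j ≥ 0, Σ_j θ_j = 1},
    E[R(θ̂_n)] − min_{θ ∈ Δ_d} R(θ) = O((log d)/n),

in place of the O(√((log d)/n)) slow rate that holds under general convexity alone.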

Background

In the model aggregation setting over the simplex, the paper derives slow-rate bounds of order √((log d)/n) for exponentiated gradient descent and KL-regularized estimators without requiring exponential concavity or i.i.d. assumptions. Prior works show that fast rates O((log d)/n) can be achieved under stronger assumptions (e.g., exponential concavity or strong convexity).
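As a point of reference, here is a minimal sketch of the exponentiated gradient update on Δ_d with iterate averaging (online-to-batch); the loss, step size, and data below are illustrative placeholders, not the paper's construction:

    import numpy as np

    def exponentiated_gradient(gradient_fn, d, n_steps, eta):
        """Exponentiated gradient descent on the simplex Delta_d.

        gradient_fn(theta, t) returns a (sub)gradient of the round-t loss
        at theta; eta is the step size. Returns the averaged iterate, the
        quantity that online-to-batch conversions bound.
        """
        theta = np.full(d, 1.0 / d)         # start at the uniform distribution
        avg = np.zeros(d)
        for t in range(n_steps):
            g = gradient_fn(theta, t)
            w = theta * np.exp(-eta * g)    # multiplicative-weights update
            theta = w / w.sum()             # renormalize back onto Delta_d
            avg += theta
        return avg / n_steps

    # Illustrative use: aggregate d fixed predictors under squared loss.
    rng = np.random.default_rng(0)
    d, n = 50, 1000
    F = rng.normal(size=(n, d))             # F[t, j] = f_j(X_t), placeholder data
    y = F[:, 0] + 0.1 * rng.normal(size=n)  # predictor 0 is nearly correct

    def grad(theta, t):
        # gradient of (<theta, F[t]> - y[t])**2 with respect to theta
        return 2.0 * (F[t] @ theta - y[t]) * F[t]

    theta_hat = exponentiated_gradient(grad, d, n, eta=np.sqrt(np.log(d) / n))

The √((log d)/n) step size is the tuning associated with the slow rate; the conjectured refinement concerns what a strong-convexity-aware tuning could achieve instead.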

The authors explicitly conjecture that leveraging strong convexity of the GLM loss on Δ_d may recover such fast rates for their framework, and they note that developing this refinement is left for future work.
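The mechanism behind the conjecture is classical, sketched here under standard assumptions rather than taken from the paper: if the risk F is μ-strongly convex and subgradients are bounded by G, online gradient-type methods with step sizes η_t ≍ 1/(μt) achieve regret O((G²/μ) log n), so online-to-batch conversion yields

    E[F(θ̄_n)] − min_{θ ∈ Δ_d} F(θ) = O((G² log n)/(μ n)),

a 1/n-type rate up to logarithms, in place of the √(1/n) slow rate. Obtaining the conjectured O((log d)/n) dependence would additionally require exploiting the geometry of Δ_d and the KL regularizer, which is precisely the refinement left open.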

References

"We conjecture that exploiting strong convexity of the loss on Δ may recover fast rates for GLMs. However, we leave this refinement to future work."

Basic Inequalities for First-Order Optimization with Applications to Statistical Risk Analysis (arXiv:2512.24999, Paik et al., 31 Dec 2025), Section "Model aggregation with KL regularization", under "Comparison with existing literature".