Fast rates for GLMs via strong convexity on the simplex
Establish that exploiting strong convexity of the generalized linear model (GLM) loss over the probability simplex Δ_d yields fast excess prediction-risk rates of order O(log d / n) for procedures based on exponentiated gradient descent or KL-divergence regularization in the model aggregation setting, thereby improving on the slow-rate bounds of order O(√(log d / n)) currently derived under general convexity.
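For concreteness, the procedure the conjecture concerns can be sketched as exponentiated gradient descent, i.e. mirror descent with the KL divergence as Bregman regularizer, applied to a convex aggregation of base predictors. The sketch below is a minimal toy implementation, not the paper's analysis: the logistic loss, the synthetic data, the step size `eta`, and all function names are illustrative assumptions.

```python
import numpy as np

def exponentiated_gradient(grad, d, eta, T):
    """Exponentiated gradient on the simplex Delta_d: multiplicative
    update theta <- theta * exp(-eta * grad), then renormalize.
    Returns the average iterate (standard for risk guarantees)."""
    theta = np.full(d, 1.0 / d)  # uniform initialization
    avg = np.zeros(d)
    for _ in range(T):
        w = theta * np.exp(-eta * grad(theta))
        theta = w / w.sum()      # projection is just renormalization
        avg += theta
    return avg / T

# Toy aggregation problem (illustrative): logistic loss of a convex
# combination of d base predictors evaluated on n samples.
rng = np.random.default_rng(0)
n, d = 200, 10
F = rng.normal(size=(n, d))                   # F[i, j] = f_j(x_i)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-F[:, 0]))).astype(float)

def grad(theta):
    # Gradient of the average logistic loss with respect to the weights.
    p = 1.0 / (1.0 + np.exp(-(F @ theta)))
    return F.T @ (p - y) / n

# eta ~ sqrt(log d / T) is the usual slow-rate tuning; the conjecture
# asks whether strong convexity on the simplex permits a larger,
# curvature-adapted step size and an O(log d / n) rate.
theta_hat = exponentiated_gradient(grad, d, eta=np.sqrt(np.log(d) / 200), T=200)
```

The multiplicative form of the update is what ties the method to KL regularization: each step solves `argmin_θ ⟨∇, θ⟩ + (1/η) KL(θ ‖ θ_t)` over Δ_d in closed form.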
References
We conjecture that exploiting strong convexity of the loss on Δ may recover fast rates for GLMs. However, we leave this refinement to future work.
— Basic Inequalities for First-Order Optimization with Applications to Statistical Risk Analysis
(2512.24999 - Paik et al., 31 Dec 2025) in Section “Model aggregation with KL regularization” — Comparison with existing literature