Extend quantized learning analysis to multi-pass SGD
Establish rigorous excess risk bounds for multi-pass stochastic gradient descent (SGD) under the quantization framework introduced in this paper, in which data features, labels, parameters, activations, and output gradients are quantized via the operators Q_d, Q_l, Q_p, Q_a, and Q_o, respectively, in high-dimensional linear regression. The goal is to characterize the excess population risk of iterate-averaged multi-pass SGD with data reuse under these practical quantization constraints.
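To make the object of study concrete, below is a minimal Python sketch of the quantity such a bound would need to control: single-sample multi-pass SGD on linear regression with quantization applied at all five interfaces, followed by iterate averaging. The stochastic-rounding quantizer, the grid resolutions `d_d, ..., d_o`, and plain Polyak averaging over all iterates are illustrative assumptions for this sketch, not the paper's actual operator definitions or averaging scheme.

```python
# Sketch: iterate-averaged multi-pass SGD with quantization at all five
# interfaces (Q_d, Q_l, Q_p, Q_a, Q_o) for linear regression.
# Assumption (not from the paper): each Q_* is unbiased stochastic rounding
# on a uniform grid of spacing delta; averaging is plain Polyak averaging.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x, delta):
    """Unbiased stochastic rounding to a grid of spacing `delta` (assumed quantizer)."""
    if delta == 0.0:                 # delta = 0 disables quantization
        return x
    scaled = x / delta
    lower = np.floor(scaled)
    return delta * (lower + (rng.random(np.shape(x)) < (scaled - lower)))

def multipass_quantized_sgd(X, y, passes=5, lr=0.01,
                            d_d=0.05, d_l=0.05, d_p=0.05, d_a=0.05, d_o=0.05):
    """Run `passes` epochs of single-sample SGD with quantized features (Q_d),
    labels (Q_l), parameters (Q_p), activations (Q_a), and output gradients
    (Q_o); return the average of all iterates."""
    n, d = X.shape
    w = np.zeros(d)
    w_sum = np.zeros(d)
    steps = 0
    for _ in range(passes):                          # data reuse across passes
        for i in rng.permutation(n):                 # reshuffle each pass
            x_q = stochastic_round(X[i], d_d)        # Q_d: feature quantization
            y_q = stochastic_round(y[i], d_l)        # Q_l: label quantization
            w_q = stochastic_round(w, d_p)           # Q_p: parameter quantization
            a_q = stochastic_round(w_q @ x_q, d_a)   # Q_a: activation quantization
            g_o = stochastic_round(a_q - y_q, d_o)   # Q_o: output-gradient quantization
            w = w - lr * g_o * x_q                   # SGD step on squared loss
            w_sum += w
            steps += 1
    return w_sum / steps                             # iterate-averaged output

# Toy experiment: estimate the excess risk R(w_bar) - R(w*) on held-out data,
# where R(w) = E[(<w, x> - y)^2], for several quantization resolutions.
n, d = 2000, 50
w_star = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = X @ w_star + 0.1 * rng.standard_normal(n)
X_te = rng.standard_normal((10 * n, d))
y_te = X_te @ w_star + 0.1 * rng.standard_normal(10 * n)

def risk(w):
    return np.mean((X_te @ w - y_te) ** 2)

for delta in (0.0, 0.05, 0.2):
    w_bar = multipass_quantized_sgd(X, y, d_d=delta, d_l=delta, d_p=delta,
                                    d_a=delta, d_o=delta)
    print(f"delta={delta:4.2f}  excess risk = {risk(w_bar) - risk(w_star):.5f}")
```

Sweeping `delta` in this sketch exhibits the trade-off a multi-pass analysis would have to quantify: how the excess risk of the averaged iterate degrades with quantization resolution when the same data are revisited across passes.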
References
Our limitations are twofold: (i) we only establish excess risk upper bounds without a corresponding lower-bound analysis, and (ii) our analysis is confined to one-pass SGD, leaving multi-pass SGD and algorithms with momentum as open problems.
— Learning under Quantization for High-Dimensional Linear Regression
(2510.18259 - Zhang et al., 21 Oct 2025) in Conclusion and Limitations