Develop quantized learning theory for momentum-based algorithms
Establish rigorous excess risk bounds for momentum-based optimization algorithms (such as SGD with heavy-ball momentum) under the quantization framework introduced in this paper, in which data features, labels, parameters, activations, and output gradients are quantized via the operators Q_d, Q_l, Q_p, Q_a, and Q_o, respectively, in high-dimensional linear regression. The objective is to characterize the population risk of iterate-averaged training with momentum under practical quantization constraints; a sketch of the training loop in question follows.
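To make the object of analysis concrete, here is a minimal Python sketch of one-pass SGD with heavy-ball momentum on least squares, with all five quantities passed through a quantizer. The stochastic uniform rounding quantizer `stoch_round`, the step size `delta`, the identification of Q_a with the quantized prediction x_t^T w, and plain (rather than tail) iterate averaging are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def stoch_round(x, delta=0.01):
    """Stochastic uniform rounding with step size delta: an illustrative
    stand-in for the paper's quantization operators Q_d, Q_l, Q_p, Q_a, Q_o."""
    lo = np.floor(x / delta) * delta
    p = (x - lo) / delta                        # probability of rounding up
    return lo + delta * (np.random.rand(*np.shape(x)) < p)

def quantized_sgd_momentum(X, y, lr=0.01, beta=0.9, delta=0.01):
    """One-pass SGD with heavy-ball momentum on least squares, quantizing
    features, labels, parameters, activations, and output gradients.
    Returns the averaged iterate (whose population risk is the object
    the proposed theory would bound)."""
    n, d = X.shape
    w = np.zeros(d)
    m = np.zeros(d)
    avg = np.zeros(d)
    for t in range(n):                          # one pass over the data
        x_t = stoch_round(X[t], delta)          # Q_d: quantize features
        y_t = stoch_round(y[t], delta)          # Q_l: quantize label
        w_q = stoch_round(w, delta)             # Q_p: quantize parameters
        a_t = stoch_round(x_t @ w_q, delta)     # Q_a: quantize activation
        g_out = stoch_round(a_t - y_t, delta)   # Q_o: quantize output gradient
        grad = g_out * x_t                      # per-sample gradient estimate
        m = beta * m + grad                     # heavy-ball momentum buffer
        w = w - lr * m
        avg += w
    return avg / n                              # iterate averaging

# Illustrative run on synthetic data (w_star is a hypothetical ground truth).
rng = np.random.default_rng(0)
d, n = 50, 5000
w_star = rng.normal(size=d) / np.sqrt(d)
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)
w_hat = quantized_sgd_momentum(X, y)
print(np.mean((X @ (w_hat - w_star)) ** 2))     # empirical excess-risk proxy
```

The open problem is precisely to bound the population counterpart of this final print statement, i.e., the excess risk of the averaged momentum iterate, in terms of the quantization resolutions and the problem's spectral quantities.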
References
Our limitations are twofold: (i) we only establish excess risk upper bounds without a corresponding lower-bound analysis, and (ii) our analysis is confined to one-pass SGD, leaving multi-pass SGD and algorithms with momentum as open problems.
— Learning under Quantization for High-Dimensional Linear Regression
(arXiv:2510.18259, Zhang et al., 21 Oct 2025), in the Conclusion and Limitations section