Transfer global L2 error guarantees from NTK analyses to deep networks
Derive from neural tangent kernel analyses global $L_2$ error bounds for deep neural network estimators trained by gradient descent, overcoming the limitation that NTK equivalence often holds only pointwise and therefore does not readily yield guarantees for the overall $L_2$ prediction error.
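To make the gap concrete, the two notions can be contrasted as follows (a sketch with illustrative notation not taken from the source: $\hat f_n$ denotes the network estimate, $f_n^{\mathrm{NTK}}$ its kernel counterpart, $m(x) = \mathbf{E}[Y \mid X = x]$ the regression function, and $P_X$ the design distribution):
$$
\underbrace{\big|\hat f_n(x) - f_n^{\mathrm{NTK}}(x)\big| \le \varepsilon_n \ \text{ for each fixed } x}_{\text{pointwise NTK equivalence}}
\qquad \text{vs.} \qquad
\underbrace{\int \big|\hat f_n(x) - m(x)\big|^2 \, P_X(dx)}_{\text{global } L_2 \text{ error}}
$$
Closeness at individual inputs does not by itself control the integrated squared error on the right, which is the quantity that governs prediction accuracy; bridging these two notions is exactly what the problem asks for.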
References
It was observed by Nitanda and Suzuki (2021) that in most studies in the neural tangent kernel setting the equivalence to deep neural networks holds only pointwise and not for the global $L_2$ error, which is crucial for prediction problems in practice. Consequently, it is often not clear from results derived in the neural tangent kernel setting how the $L_2$ error of the deep neural network estimate behaves.
— Statistically guided deep learning
(2504.08489 - Kohler et al., 11 Apr 2025) in Section 1.9 (Discussion of related results), Neural tangent kernel setting