Explaining the looseness of existing DNN generalization bounds
Identify and characterize the factors that make existing generalization bounds for deep neural networks loose in practice, and determine why the guarantees they provide fall so far short of empirically observed generalization performance.
References
Still, in practice these generalization bounds tend to be very loose \citep{jiang_2019_fantastic_generalization} and it is unclear what makes them suboptimal.
— Deep Learning as a Convex Paradigm of Computation: Minimizing Circuit Size with ResNets
(2511.20888 - Jacot, 25 Nov 2025) in Related Works, Introduction ("Generalization Bounds" subsection)