Explaining the looseness of existing DNN generalization bounds

Identify and characterize the factors that make existing generalization bounds for deep neural networks loose in practice, and determine why these bounds fall so far short of empirically observed generalization performance.

Background

The paper surveys several families of generalization bounds for deep networks (e.g., path-norm bounds, bounds based on products of layer norms, and covering-number-based bounds) and notes that, empirically, these bounds are often very loose. Understanding the concrete reasons for this looseness could guide the development of tighter theoretical guarantees and connect to the proposed circuit-size perspective.

References

Still, in practice these generalization bounds tend to be very loose \citep{jiang_2019_fantastic_generalization} and it is unclear what makes them suboptimal.

Deep Learning as a Convex Paradigm of Computation: Minimizing Circuit Size with ResNets (2511.20888 - Jacot, 25 Nov 2025) in Related Works, Introduction ("Generalization Bounds" subsection)