Machine-learning complexity controlled by thickness and convexity gap
Establish that, for deep neural networks whose latent space $\Omega$ satisfies the $C$-GNP property, upper bounds on the sup norm of the thickness function, $\|\tau_{\Omega}\|_{\infty}$, and on the convexity gap $\gamma(\Omega)$ imply upper bounds on the architecture's complexity (e.g., number of layers and width) and on the network's approximation capacity.
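One schematic reading of the claimed implication (an assumption on our part: the source states the conjecture informally and does not fix the form of the bounds, so the functions $F_1$, $F_2$, $G$ and the complexity measures below are illustrative names, not notation from the paper): there should exist nondecreasing functions, depending only on the ambient dimension and the constant $C$, with

$$\operatorname{depth}(N) \le F_1\bigl(\|\tau_{\Omega}\|_{\infty},\, \gamma(\Omega)\bigr), \qquad \operatorname{width}(N) \le F_2\bigl(\|\tau_{\Omega}\|_{\infty},\, \gamma(\Omega)\bigr),$$

and, consequently, a bound on the approximation error achievable by any such network $N$,

$$\mathcal{E}_{\mathrm{approx}}(N) \le G\bigl(\|\tau_{\Omega}\|_{\infty},\, \gamma(\Omega)\bigr).$$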
References
Conjecture. For a deep neural network whose latent space $\Omega$ satisfies the $C$-GNP property, the complexity of the architecture (number of layers, width) is controlled by $\|\tau_{\Omega}\|_{\infty}$ and $\gamma(\Omega)$. More precisely, a bound on these two measures implies a bound on the approximation capacity of the network.
— Geometric Properties of Level Sets for Domains under Geometric Normal Property (Barkatou, arXiv:2603.30026, 31 Mar 2026), in Applications and Perspectives (Machine Learning subsection).
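As a purely illustrative companion, the sketch below computes discrete proxies for the two controlling quantities on a rasterized 2-D region: the maximum distance-to-complement (an inscribed-ball notion of thickness, standing in for $\|\tau_{\Omega}\|_{\infty}$) and the maximal protrusion of the convex hull beyond the region (standing in for $\gamma(\Omega)$). Both proxies are assumptions made for illustration; the paper's definitions of $\tau_{\Omega}$ and $\gamma(\Omega)$ are not reproduced in this excerpt, and the $C$-GNP property itself is not checked.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import convex_hull_image


def thickness_sup_norm(mask: np.ndarray) -> float:
    """Inscribed-ball thickness proxy: for each pixel of Omega, the
    Euclidean distance to the complement; return the maximum.
    (Hypothetical stand-in for ||tau_Omega||_inf, in pixel units.)"""
    return float(distance_transform_edt(mask).max())


def convexity_gap(mask: np.ndarray) -> float:
    """Hull-protrusion proxy for gamma(Omega): over the pixels of the
    convex hull that lie outside Omega, the maximum Euclidean distance
    back to Omega. Zero iff the rasterized region is already convex."""
    hull = convex_hull_image(mask)
    dist_to_omega = distance_transform_edt(~mask)  # distance to Omega
    gap_region = hull & ~mask                      # hull minus Omega
    return float(dist_to_omega[gap_region].max()) if gap_region.any() else 0.0


# Example: a crescent-shaped latent region (non-convex, moderate thickness).
yy, xx = np.mgrid[0:200, 0:200]
disk = (xx - 100) ** 2 + (yy - 100) ** 2 < 80 ** 2
bite = (xx - 140) ** 2 + (yy - 100) ** 2 < 60 ** 2
omega = disk & ~bite

print("||tau||_inf proxy :", thickness_sup_norm(omega))
print("gamma(Omega) proxy:", convexity_gap(omega))
```

By construction, the gap proxy vanishes exactly when the rasterized region coincides with its convex hull, matching the intuition that $\gamma(\Omega)$ quantifies deviation from convexity.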