Machine-learning complexity controlled by thickness and convexity gap

Establish that for deep neural networks whose latent space Ω satisfies the C-GNP property, upper bounds on the thickness function norm ∥τ_Ω∥_∞ and the convexity gap γ(Ω) imply upper bounds on the architecture’s complexity (e.g., number of layers and width) and on the network’s approximation capacity.

Background

The paper introduces two quantitative geometric measures for C-GNP domains, the thickness function τ_Ω and the convexity gap γ(Ω), and studies their regularity and stability under Hausdorff convergence. These measures quantify how far a domain departs from convexity and how ‘thick’ it is relative to the reference convex set C.
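
The paper's exact definitions of τ_Ω and γ(Ω) are not reproduced in this summary. As a purely illustrative sketch, the Python snippet below computes ad hoc stand-ins on a discretized nonconvex planar domain: a thickness proxy (largest distance from the domain to its boundary) and a nonconvexity proxy (largest gap between the domain and its convex hull, standing in for the reference set C). The function names (`thickness_proxy`, `convexity_gap_proxy`), the L-shaped test domain, and the proxy formulas are all assumptions made for illustration, not the paper's definitions.

```python
import numpy as np
from scipy.spatial import ConvexHull, cKDTree

# Illustrative proxies only: the paper defines tau_Omega and gamma(Omega)
# relative to the reference convex set C of the C-GNP property; the stand-ins
# below (distance to the boundary, gap to the convex hull) are assumptions.

def polyline_samples(vertices, per_edge=400):
    """Sample points densely along a closed polyline (the domain boundary)."""
    vertices = np.asarray(vertices, dtype=float)
    pieces = []
    for a, b in zip(vertices, np.roll(vertices, -1, axis=0)):
        t = np.linspace(0.0, 1.0, per_edge, endpoint=False)[:, None]
        pieces.append(a + t * (b - a))
    return np.vstack(pieces)

def thickness_proxy(domain_pts, boundary_pts):
    """Proxy for ||tau_Omega||_inf: largest distance from a domain point to the boundary."""
    dists, _ = cKDTree(boundary_pts).query(domain_pts)
    return float(dists.max())

def convexity_gap_proxy(domain_pts, n_probe=50000, seed=1):
    """Proxy for gamma(Omega): how far the convex hull of the domain extends beyond it,
    estimated as the largest distance from a point of the hull to the domain."""
    hull = ConvexHull(domain_pts)
    A, b = hull.equations[:, :-1], hull.equations[:, -1]
    rng = np.random.default_rng(seed)
    lo, hi = domain_pts.min(axis=0), domain_pts.max(axis=0)
    probe = rng.uniform(lo, hi, size=(n_probe, 2))
    inside_hull = np.all(probe @ A.T + b <= 1e-12, axis=1)
    dists, _ = cKDTree(domain_pts).query(probe[inside_hull])
    return float(dists.max())

if __name__ == "__main__":
    # L-shaped domain: unit square with its upper-right quarter removed (reentrant corner).
    corners = [(0, 0), (1, 0), (1, 0.5), (0.5, 0.5), (0.5, 1), (0, 1)]
    boundary = polyline_samples(corners)
    rng = np.random.default_rng(0)
    pts = rng.uniform(0.0, 1.0, size=(40000, 2))
    omega = pts[~((pts[:, 0] > 0.5) & (pts[:, 1] > 0.5))]
    print("thickness proxy for ||tau_Omega||_inf:", thickness_proxy(omega, boundary))
    print("convexity gap proxy for gamma(Omega):  ", convexity_gap_proxy(omega))
```

The gap to the convex hull is only one of several natural ways to quantify departure from convexity; the paper's γ(Ω) is defined relative to the reference set C and may differ from this Hausdorff-type stand-in.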

Motivated by applications, the authors conjecture a direct link between these geometric quantities and neural network complexity when the network’s latent space satisfies the C-GNP constraint.

References

Conjecture. For a deep neural network whose latent space $\Omega$ satisfies the $C$-GNP property, the complexity of the architecture (number of layers, width) is controlled by $\|\tau_{\Omega}\|_{\infty}$ and $\gamma(\Omega)$. More precisely, a bound on these two measures implies a bound on the approximation capacity of the network.
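
Schematically, and with placeholder notation that is an assumption of this summary rather than the paper's (the nondecreasing functions $F_1$, $F_2$ are not specified in this excerpt), the conjectured implication can be written as

$$
\|\tau_{\Omega}\|_{\infty} \le T \quad\text{and}\quad \gamma(\Omega) \le G
\;\Longrightarrow\;
\mathrm{depth}(\mathcal{N}) \le F_1(T, G) \quad\text{and}\quad \mathrm{width}(\mathcal{N}) \le F_2(T, G),
$$

with a corresponding bound on the approximation capacity of $\mathcal{N}$ following from these architectural bounds; the precise complexity measure and function class are left open by the conjecture as stated.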

Geometric Properties of Level Sets for Domains under Geometric Normal Property (2603.30026 - Barkatou, 31 Mar 2026) in Applications and Perspectives (Machine Learning subsection)