Sobolev Approximation of Deep ReLU Network in Log-weighted Barron Space
Abstract: Universal approximation theorems show that neural networks can approximate any continuous function; however, the number of parameters may grow exponentially with the ambient dimension, so these results do not fully explain the practical success of deep models on high-dimensional data. Barron space theory addresses this: if a target function belongs to a Barron space, a two-layer network with $n$ parameters achieves an $O(n^{-1/2})$ approximation error in $L^2$. Yet classical Barron spaces $\mathscr{B}^{s+1}$ still require stronger regularity than Sobolev spaces $H^s$, and existing depth-sensitive results often assume constraints such as $sL \le 1/2$. In this paper, we introduce a log-weighted Barron space $\mathscr{B}^{\log}$, which requires a strictly weaker assumption than $\mathscr{B}^s$ for any $s>0$. For this new function space, we first study embedding properties and carry out a statistical analysis via Rademacher complexity. We then prove that functions in $\mathscr{B}^{\log}$ can be approximated by deep ReLU networks with explicit depth dependence. Finally, we define a family $\mathscr{B}^{s,\log}$, establish approximation bounds in the $H^1$ norm, and identify the maximal depth scales under which these rates are preserved. Our results clarify how depth reduces the regularity required for efficient representation, offering a more precise explanation for the performance of deep architectures beyond the classical Barron setting and for their stable use in high-dimensional problems.
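As background for the rate quoted above, the classical spectral Barron setting can be sketched as follows; this is the standard formulation from the Barron-space literature, not the paper's log-weighted construction, and the norm $\|\cdot\|_{\mathscr{B}^{s}}$ and constant $C$ are written only schematically:
\[
  \|f\|_{\mathscr{B}^{s}} \;=\; \int_{\mathbb{R}^d} \bigl(1+|\omega|\bigr)^{s}\,\bigl|\widehat{f}(\omega)\bigr|\,d\omega,
  \qquad
  \inf_{f_n \in \mathcal{F}_n} \|f - f_n\|_{L^{2}} \;\le\; C\,\|f\|_{\mathscr{B}^{1}}\, n^{-1/2},
\]
where $\mathcal{F}_n$ denotes the class of two-layer networks with $n$ hidden units. The log-weighted space $\mathscr{B}^{\log}$ introduced in the paper replaces the polynomial weight $(1+|\omega|)^{s}$ with a weaker, logarithmic weight; its precise definition is given in the paper itself.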