$L^p$ sampling numbers for the Fourier-analytic Barron space (2208.07605v1)
Abstract: In this paper, we consider Barron functions $f : [0,1]^d \to \mathbb{R}$ of smoothness $\sigma > 0$, which are functions that can be written as
\[
f(x) = \int_{\mathbb{R}^d} F(\xi) \, e^{2 \pi i \langle x, \xi \rangle} \, d\xi
\quad \text{with} \quad
\int_{\mathbb{R}^d} |F(\xi)| \cdot (1 + |\xi|)^{\sigma} \, d\xi < \infty.
\]
For $\sigma = 1$, these functions play a prominent role in machine learning, since they can be efficiently approximated by (shallow) neural networks without suffering from the curse of dimensionality. For these functions, we study the following question: Given $m$ point samples $f(x_1),\dots,f(x_m)$ of an unknown Barron function $f : [0,1]^d \to \mathbb{R}$ of smoothness $\sigma$, how well can $f$ be recovered from these samples, for an optimal choice of the sampling points and the reconstruction procedure? Denoting the optimal reconstruction error measured in $L^p$ by $s_m(\sigma; L^p)$, we show that
\[
m^{-\frac{1}{\max\{p,2\}} - \frac{\sigma}{d}}
\lesssim s_m(\sigma; L^p)
\lesssim (\ln(e + m))^{\alpha(\sigma,d)/p} \cdot m^{-\frac{1}{\max\{p,2\}} - \frac{\sigma}{d}},
\]
where the implied constants depend only on $\sigma$ and $d$, and where $\alpha(\sigma,d)$ stays bounded as $d \to \infty$.
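As a worked specialization (obtained simply by plugging values into the displayed bound, not an additional result stated in the abstract): for the machine-learning case $\sigma = 1$ and $p = 2$, one has $\max\{p,2\} = 2$, and the estimate reads
\[
m^{-\frac{1}{2} - \frac{1}{d}} \lesssim s_m(1; L^2) \lesssim (\ln(e + m))^{\alpha(1,d)/2} \cdot m^{-\frac{1}{2} - \frac{1}{d}},
\]
so the optimal $L^2$ reconstruction error decays at rate $m^{-1/2 - 1/d}$ up to a logarithmic factor; as $d \to \infty$, this rate approaches $m^{-1/2}$, with $\alpha(1,d)$ remaining bounded.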