
$L^p$ sampling numbers for the Fourier-analytic Barron space (2208.07605v1)

Published 16 Aug 2022 in math.FA, cs.LG, and stat.ML

Abstract: In this paper, we consider Barron functions $f : [0,1]^d \to \mathbb{R}$ of smoothness $\sigma > 0$, which are functions that can be written as \[ f(x) = \int_{\mathbb{R}^d} F(\xi) \, e^{2 \pi i \langle x, \xi \rangle} \, d \xi \quad \text{with} \quad \int_{\mathbb{R}^d} |F(\xi)| \cdot (1 + |\xi|)^{\sigma} \, d \xi < \infty. \] For $\sigma = 1$, these functions play a prominent role in machine learning, since they can be efficiently approximated by (shallow) neural networks without suffering from the curse of dimensionality. For these functions, we study the following question: Given $m$ point samples $f(x_1),\dots,f(x_m)$ of an unknown Barron function $f : [0,1]^d \to \mathbb{R}$ of smoothness $\sigma$, how well can $f$ be recovered from these samples, for an optimal choice of the sampling points and the reconstruction procedure? Denoting the optimal reconstruction error measured in $L^p$ by $s_m(\sigma; L^p)$, we show that \[ m^{- \frac{1}{\max \{ p,2 \}} - \frac{\sigma}{d}} \lesssim s_m(\sigma; L^p) \lesssim (\ln (e + m))^{\alpha(\sigma,d) / p} \cdot m^{- \frac{1}{\max \{ p,2 \}} - \frac{\sigma}{d}} , \] where the implied constants only depend on $\sigma$ and $d$ and where $\alpha(\sigma,d)$ stays bounded as $d \to \infty$.
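To make the rate in the main theorem concrete, here is a minimal sketch (names and parameter choices are illustrative, not from the paper) that evaluates the lower-bound envelope $m^{-1/\max\{p,2\} - \sigma/d}$ and shows how weakly the decay exponent improves with the dimension $d$:

```python
import math

def sampling_error_envelope(m: int, sigma: float, d: int, p: float) -> float:
    """Evaluate m^(-1/max(p,2) - sigma/d), the rate (up to constants and
    the logarithmic factor in the upper bound) at which the optimal
    reconstruction error s_m(sigma; L^p) decays in the number of samples m."""
    exponent = -(1.0 / max(p, 2.0) + sigma / d)
    return m ** exponent

# Example: sigma = 1 (the machine-learning-relevant smoothness), p = 2.
# For large d the exponent approaches -1/2, so extra smoothness per
# dimension contributes less and less to the rate.
for d in (2, 10, 100):
    print(d, sampling_error_envelope(1000, sigma=1.0, d=d, p=2.0))
```

The point of the sketch: for fixed $\sigma$, the exponent $-1/\max\{p,2\} - \sigma/d$ tends to $-1/\max\{p,2\}$ as $d \to \infty$, which is the dimension-independent part of the rate.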

Citations (6)