
High-Order Approximation Rates for Shallow Neural Networks with Cosine and ReLU$^k$ Activation Functions (2012.07205v7)

Published 14 Dec 2020 in math.NA and cs.NA

Abstract: We study the approximation properties of shallow neural networks with an activation function which is a power of the rectified linear unit. Specifically, we consider the dependence of the approximation rate on the dimension and the smoothness in the spectral Barron space of the underlying function $f$ to be approximated. We show that as the smoothness index $s$ of $f$ increases, shallow neural networks with ReLU$^k$ activation function obtain an improved approximation rate up to a best possible rate of $O(n^{-(k+1)}\log(n))$ in $L^2$, independent of the dimension $d$. The significance of this result is that the activation function ReLU$^k$ is fixed independent of the dimension, while for classical methods the degree of polynomial approximation or the smoothness of the wavelets used would have to increase in order to take advantage of the dimension dependent smoothness of $f$. In addition, we derive improved approximation rates for shallow neural networks with cosine activation function on the spectral Barron space. Finally, we prove lower bounds showing that the approximation rates attained are optimal under the given assumptions.
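
The abstract concerns approximants of the form $f_n(x) = \sum_{i=1}^{n} a_i\,\max(0,\, w_i \cdot x + b_i)^k$, i.e. one-hidden-layer networks with the ReLU$^k$ activation. The sketch below is not from the paper; it is a minimal illustration of that architecture in which the inner weights are drawn at random and only the outer coefficients are fit by least squares (the class name, parameter names, and the random-feature fitting routine are assumptions for illustration).

```python
import numpy as np

def relu_k(z, k):
    """ReLU^k activation: max(0, z) raised to the power k."""
    return np.maximum(z, 0.0) ** k

class ShallowReLUkNet:
    """Sketch of a shallow network f_n(x) = sum_i a_i * relu_k(w_i . x + b_i, k)."""

    def __init__(self, n_neurons, dim, k, seed=None):
        rng = np.random.default_rng(seed)
        self.k = k
        # Inner weights and biases are sampled at random here purely for
        # illustration; the paper studies the best choice of all parameters.
        self.W = rng.standard_normal((n_neurons, dim))
        self.b = rng.standard_normal(n_neurons)
        self.a = np.zeros(n_neurons)

    def features(self, X):
        # X: (num_samples, dim) -> hidden-layer outputs (num_samples, n_neurons)
        return relu_k(X @ self.W.T + self.b, self.k)

    def fit(self, X, y):
        # Least-squares fit of the outer coefficients a_i against samples of f.
        Phi = self.features(X)
        self.a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def predict(self, X):
        return self.features(X) @ self.a
```

For example, fitting `ShallowReLUkNet(n_neurons=200, dim=2, k=2)` to samples of a smooth target function and measuring the empirical $L^2$ error as `n_neurons` grows gives a rough, empirical counterpart to the $O(n^{-(k+1)}\log(n))$ rate the paper proves for functions in the spectral Barron space.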

Citations (53)
