
Minimum width for universal approximation using ReLU networks on compact domain (2309.10402v2)

Published 19 Sep 2023 in cs.LG and stat.ML

Abstract: It has been shown that deep neural networks of a large enough width are universal approximators, but they are not if the width is too small. There have been several attempts to characterize the minimum width $w_{\min}$ enabling the universal approximation property; however, only a few of them have found exact values. In this work, we show that the minimum width for $L^p$ approximation of $L^p$ functions from $[0,1]^{d_x}$ to $\mathbb{R}^{d_y}$ is exactly $\max\{d_x,d_y,2\}$ if the activation function is ReLU-like (e.g., ReLU, GELU, Softplus). Compared to the known result for ReLU networks, $w_{\min}=\max\{d_x+1,d_y\}$ when the domain is $\mathbb{R}^{d_x}$, our result is the first to show that approximation on a compact domain requires a smaller width than on $\mathbb{R}^{d_x}$. We next prove a lower bound on $w_{\min}$ for uniform approximation using general activation functions including ReLU: $w_{\min}\ge d_y+1$ if $d_x<d_y\le 2d_x$. Together with our first result, this shows a dichotomy between $L^p$ and uniform approximation for general activation functions and input/output dimensions.
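For quick reference, the width results stated in the abstract can be written as display equations; this is only a restatement of the claims above, in the same notation, not additional results:

```latex
% Restatement of the abstract's results (same notation as above).
% (1) L^p approximation of L^p functions on the compact domain [0,1]^{d_x},
%     with a ReLU-like activation (e.g., ReLU, GELU, Softplus):
\[
  w_{\min} = \max\{d_x,\, d_y,\, 2\},
\]
% versus the known value for ReLU networks on the unbounded domain \mathbb{R}^{d_x}:
\[
  w_{\min} = \max\{d_x + 1,\, d_y\}.
\]
% (2) Lower bound for uniform approximation with general activation functions
%     (including ReLU), in the regime d_x < d_y <= 2 d_x:
\[
  w_{\min} \ge d_y + 1.
\]
```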

Citations (7)

