
Minimum width for universal approximation using ReLU networks on compact domain

Published 19 Sep 2023 in cs.LG and stat.ML (arXiv:2309.10402v2)

Abstract: It has been shown that deep neural networks of sufficiently large width are universal approximators, but they are not if the width is too small. There have been several attempts to characterize the minimum width $w_{\min}$ enabling the universal approximation property; however, only a few of them have found the exact values. In this work, we show that the minimum width for the $L^p$ approximation of $L^p$ functions from $[0,1]^{d_x}$ to $\mathbb{R}^{d_y}$ is exactly $\max\{d_x,d_y,2\}$ if the activation function is ReLU-like (e.g., ReLU, GELU, Softplus). Compared to the known result for ReLU networks, $w_{\min}=\max\{d_x+1,d_y\}$ when the domain is $\mathbb{R}^{d_x}$, our result is the first to show that approximation on a compact domain requires a smaller width than on $\mathbb{R}^{d_x}$. We next prove a lower bound on $w_{\min}$ for uniform approximation using general activation functions including ReLU: $w_{\min}\ge d_y+1$ if $d_x<d_y\le 2d_x$. Together with our first result, this shows a dichotomy between $L^p$ and uniform approximations for general activation functions and input/output dimensions.
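A concrete illustration (values computed from the formulas stated in the abstract, not quoted from the paper): for $d_x=3$ and $d_y=2$, the $L^p$ minimum width on the compact domain $[0,1]^3$ is $\max\{3,2,2\}=3$, while the known result on $\mathbb{R}^3$ gives $\max\{3+1,2\}=4$, so the compact domain admits a strictly narrower network. For the uniform-approximation bound, take $d_x=2$ and $d_y=3$ (so $d_x<d_y\le 2d_x$): uniform approximation then requires $w_{\min}\ge d_y+1=4$, whereas the $L^p$ minimum width is $\max\{2,3,2\}=3$, exhibiting the dichotomy between the two approximation norms.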

