Function approximation by deep neural networks with parameters $\{0,\pm \frac{1}{2}, \pm 1, 2\}$ (2103.08659v3)

Published 15 Mar 2021 in stat.ML and cs.LG

Abstract: In this paper it is shown that $C_\beta$-smooth functions can be approximated by deep neural networks with the ReLU activation function and with parameters in $\{0, \pm\frac{1}{2}, \pm 1, 2\}$. The $l_0$ and $l_1$ parameter norms of the considered networks are therefore equivalent. The depth, width, and number of active parameters of the constructed networks have, up to a logarithmic factor, the same dependence on the approximation error as networks with parameters in $[-1,1]$. In particular, nonparametric regression estimation with the constructed networks attains the same convergence rate as with sparse networks with parameters in $[-1,1]$.
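The claimed $l_0$–$l_1$ equivalence follows directly from the parameter set: every nonzero parameter has absolute value in $\{\frac{1}{2}, 1, 2\}$, so each nonzero entry of the parameter vector $\theta$ contributes exactly $1$ to $\|\theta\|_0$ and between $\frac{1}{2}$ and $2$ to $\|\theta\|_1$, giving

$$\tfrac{1}{2}\,\|\theta\|_0 \;\le\; \|\theta\|_1 \;\le\; 2\,\|\theta\|_0.$$

As a concrete illustration of networks with this parameter restriction (a minimal sketch, not the paper's construction), the Python snippet below computes the "hat" function $\Lambda(x) = 2\,\mathrm{ReLU}(x) - 4\,\mathrm{ReLU}(x - \frac{1}{2}) + 2\,\mathrm{ReLU}(x - 1)$, a standard building block in ReLU approximation theory, using only weights and biases from $\{0, \pm\frac{1}{2}, \pm 1, 2\}$. The coefficient $-4$ lies outside the set, so it is emulated by duplicating one hidden unit (two weights of $-1$ in place of a disallowed $-2$) and scaling the output by $2$; all function names are illustrative.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def hat_quantized(x):
    """Hat function on [0, 1], computed by a ReLU network whose weights
    and biases all lie in {0, +-1/2, +-1, 2}.
    """
    # Hidden layer 1: input weights all 1, biases in {0, -1/2, -1}.
    # The unit with breakpoint 1/2 is duplicated; the two copies stand
    # in for the disallowed weight -2 in the next layer.
    h1 = relu(x)           # ReLU(x)
    h2a = relu(x - 0.5)    # ReLU(x - 1/2), first copy
    h2b = relu(x - 0.5)    # ReLU(x - 1/2), second copy
    h3 = relu(x - 1.0)     # ReLU(x - 1)
    # Hidden layer 2: weights (1, -1, -1, 1), bias 0. The pre-activation
    # equals x on [0, 1/2], 1 - x on [1/2, 1], and 0 elsewhere, so it is
    # nonnegative and the ReLU acts as the identity.
    m = relu(h1 - h2a - h2b + h3)
    # Linear output layer: weight 2, bias 0.
    return 2.0 * m

# Check against the unconstrained formula for the hat function.
x = np.linspace(-1.0, 2.0, 601)
reference = 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)
assert np.allclose(hat_quantized(x), reference)
```

Duplicating units and adding layers in this way lets coefficients outside the fixed set be emulated at the cost of extra width and depth; the abstract's point is that, for approximating $C_\beta$-smooth functions, this overhead amounts to at most a logarithmic factor in depth, width, and the number of active parameters.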

Citations (4)
