Negative results for approximation using single layer and multilayer feedforward neural networks (1810.10032v4)
Published 23 Oct 2018 in cs.LG and stat.ML
Abstract: We prove a negative result for the approximation of functions defined on compact subsets of $\mathbb{R}^d$ (where $d \geq 2$) using feedforward neural networks with one hidden layer and arbitrary continuous activation function. In a nutshell, this result establishes the existence of target functions that are as difficult to approximate using these neural networks as one may want. We also demonstrate an analogous result (for general $d \in \mathbb{N}$) for neural networks with an \emph{arbitrary} number of hidden layers, for activation functions that are either rational functions or continuous splines with finitely many pieces.
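To make the approximant class concrete, here is a minimal sketch (not from the paper) of the one-hidden-layer networks the abstract refers to, i.e. maps of the form $f(x) = \sum_{i=1}^{n} c_i\,\sigma(\langle w_i, x\rangle + b_i)$ on a compact subset of $\mathbb{R}^d$. The function name `one_hidden_layer` and the choice `sigma=np.tanh` are purely illustrative assumptions; the paper's negative result applies to an arbitrary continuous activation.

```python
# A minimal illustrative sketch (not from the paper) of the approximant
# class in the abstract: one-hidden-layer feedforward networks
#     f(x) = sum_i c_i * sigma(<w_i, x> + b_i)
# on points x in R^d, with a continuous activation sigma.
import numpy as np

def one_hidden_layer(x, weights, biases, coeffs, sigma=np.tanh):
    """Evaluate f(x) = sum_i coeffs[i] * sigma(weights[i] @ x + biases[i]).

    x       : array of shape (d,), a point in R^d
    weights : array of shape (n, d), input-to-hidden weights
    biases  : array of shape (n,), hidden-layer biases
    coeffs  : array of shape (n,), hidden-to-output coefficients
    sigma   : any continuous activation (tanh is one illustrative choice)
    """
    return coeffs @ sigma(weights @ x + biases)

# Example: a network with n = 3 hidden units evaluated at a point in R^2.
rng = np.random.default_rng(0)
value = one_hidden_layer(
    x=np.array([0.5, -0.25]),
    weights=rng.standard_normal((3, 2)),
    biases=rng.standard_normal(3),
    coeffs=rng.standard_normal(3),
)
print(value)  # scalar output of the network at x
```

The paper's negative result concerns this family with the number of hidden units $n$ unrestricted: no matter how $n$, the weights, and the activation are chosen, there exist continuous target functions whose best approximation error decays as slowly as one wishes.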