Approximation in $L^p(\mu)$ with deep ReLU neural networks (1904.04789v1)
Abstract: We discuss the expressive power of neural networks which use the non-smooth ReLU activation function $\varrho(x) = \max\{0,x\}$ by analyzing the approximation theoretic properties of such networks. The existing results mainly fall into two categories: approximation using ReLU networks with a fixed depth, or using ReLU networks whose depth increases with the approximation accuracy. After reviewing these findings, we show that the results concerning networks with fixed depth, which up to now only consider approximation in $L^p(\lambda)$ for the Lebesgue measure $\lambda$, can be generalized to approximation in $L^p(\mu)$ for any finite Borel measure $\mu$. In particular, the generalized results apply in the usual setting of statistical learning theory, where one is interested in approximation in $L^2(\mathbb{P})$, with the probability measure $\mathbb{P}$ describing the distribution of the data.
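To fix notation, the following is a generic formulation of the approximation problem discussed in the abstract; the symbols $\Phi$, $T_\ell$, $A_\ell$, $b_\ell$ are chosen here for illustration and are not necessarily the paper's. A ReLU network of depth $L$ realizes a function
\[
  \Phi = T_L \circ \varrho \circ T_{L-1} \circ \cdots \circ \varrho \circ T_1,
  \qquad
  T_\ell(y) = A_\ell\, y + b_\ell,
\]
where each $T_\ell$ is an affine map collecting the weights and biases, and $\varrho(x) = \max\{0,x\}$ is applied componentwise. The approximation error of $\Phi$ for a target function $f$ is then measured as
\[
  \| f - \Phi \|_{L^p(\mu)}
  = \Bigl( \int \lvert f(x) - \Phi(x) \rvert^{p} \, d\mu(x) \Bigr)^{1/p},
\]
with $\mu$ a finite Borel measure; taking $\mu = \mathbb{P}$ and $p = 2$ recovers the statistical learning setting mentioned above.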