Computation complexity of deep ReLU neural networks in high-dimensional approximation (2103.00815v1)
Abstract: The purpose of the present paper is to study the computation complexity of deep ReLU neural networks for approximating functions in H\"older-Nikol'skii spaces of mixed smoothness $H^\alpha_\infty(\mathbb{I}^d)$ on the unit cube $\mathbb{I}^d:=[0,1]^d$. In this context, for any function $f\in H^\alpha_\infty(\mathbb{I}^d)$, we explicitly construct nonadaptive and adaptive deep ReLU neural networks whose output approximates $f$ with a prescribed accuracy $\varepsilon$, and we prove dimension-dependent bounds for the computation complexity of this approximation, characterized by the size and the depth of the network, explicitly in $d$ and $\varepsilon$. Our results show the advantage of the adaptive method of approximation by deep ReLU neural networks over the nonadaptive one.
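The following is a minimal illustrative sketch, not the paper's construction (which is multivariate, deep, and adaptive): a one-hidden-layer ReLU network that exactly realizes the piecewise-linear interpolant of a univariate function on a uniform grid of $n+1$ knots. For a function of H\"older smoothness $\alpha \le 1$ this interpolant has sup-norm error $O(n^{-\alpha})$, so accuracy $\varepsilon$ needs a network of size roughly $\varepsilon^{-1/\alpha}$, which conveys the size-versus-accuracy trade-off the abstract quantifies. The names `f`, `n`, and the test function are illustrative assumptions.

```python
# Sketch: a one-hidden-layer ReLU network realizing piecewise-linear
# interpolation on [0, 1]. Not the paper's construction; for illustration only.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_interpolant(f, n):
    """Return a network x -> f(0) + sum_i c_i * relu(x - x_i) that equals
    the piecewise-linear interpolant of f at the knots x_i = i/n."""
    knots = np.linspace(0.0, 1.0, n + 1)
    vals = f(knots)
    slopes = np.diff(vals) / np.diff(knots)   # slope on each grid cell
    coeffs = np.diff(slopes, prepend=0.0)     # slope change at each knot
    def net(x):
        x = np.asarray(x, dtype=float)
        # hidden layer: n ReLU units, one per interior breakpoint
        return vals[0] + relu(x[..., None] - knots[:-1]) @ coeffs
    return net

if __name__ == "__main__":
    f = lambda x: np.sin(2.0 * np.pi * x)     # smooth test function
    xs = np.linspace(0.0, 1.0, 10_001)
    for n in (8, 64, 512):                    # network size grows with n
        err = np.max(np.abs(relu_interpolant(f, n)(xs) - f(xs)))
        print(f"n = {n:4d}  sup-error = {err:.2e}")
```

Since the test function here is $C^2$, the printed error decays like $n^{-2}$; for a merely H\"older-$\alpha$ function one only gets the rate $n^{-\alpha}$, which is the univariate shadow of the dimension-dependent bounds studied in the paper.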