Uniform Approximation with Quadratic Neural Networks (2201.03747v3)
Abstract: In this work, we examine the approximation capabilities of deep neural networks with the Rectified Quadratic Unit (ReQU) activation function, defined as \(\max(0,x)^2\), for approximating Hölder-regular functions with respect to the uniform norm. We constructively prove that deep ReQU neural networks can approximate any function in the \(R\)-ball of \(r\)-Hölder-regular functions (\(\mathcal{H}^{r,R}([-1,1]^d)\)) to any accuracy \(\epsilon\) with at most \(\mathcal{O}\left(\epsilon^{-d/2r}\right)\) neurons and a fixed number of layers. This result highlights that the effectiveness of the approximation depends significantly on the smoothness of the target function and on the characteristics of the ReQU activation. Our proof is based on approximating local Taylor expansions with deep ReQU neural networks, demonstrating their ability to capture the behavior of Hölder-regular functions effectively. Furthermore, the results generalize straightforwardly to any Rectified Power Unit (RePU) activation function of the form \(\max(0,x)^p\) for \(p \geq 2\), indicating the broader applicability of our findings within this family of activations.
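A standard fact about RePU activations, and plausibly the mechanism behind the fixed-depth result, is that a ReQU network of constant size can represent squares and products exactly: \(\max(0,x)^2 + \max(0,-x)^2 = x^2\), and by polarization \(xy = \frac{1}{4}\left((x+y)^2 - (x-y)^2\right)\). Iterating these identities yields exact monomials and hence exact local Taylor polynomials. The following minimal NumPy sketch verifies these identities numerically; it is an illustration of this general principle, not the authors' construction, and the helper names `requ`, `square`, and `product` are hypothetical:

```python
import numpy as np

def requ(x):
    """Rectified Quadratic Unit: max(0, x)^2."""
    return np.maximum(0.0, x) ** 2

def square(x):
    """Exact square from two ReQU units: x^2 = requ(x) + requ(-x)."""
    return requ(x) + requ(-x)

def product(x, y):
    """Exact product via the polarization identity,
    x*y = ((x + y)^2 - (x - y)^2) / 4,
    using four ReQU evaluations in total."""
    return (square(x + y) - square(x - y)) / 4.0

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)
assert np.allclose(square(x), x ** 2)
assert np.allclose(product(x, y), x * y)
print("x^2 and x*y reproduced exactly by fixed-size ReQU combinations")
```

Because these identities are exact rather than approximate, the depth of the network need not grow with the target accuracy \(\epsilon\); only the number of local Taylor pieces, and hence the number of neurons, scales with \(\epsilon^{-d/2r}\).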