Uniform Approximation with Quadratic Neural Networks

Published 11 Jan 2022 in cs.LG and math.FA | arXiv:2201.03747v3

Abstract: In this work, we examine the approximation capabilities of deep neural networks with the Rectified Quadratic Unit (ReQU) activation function, defined as $\max(0,x)^2$, for approximating Hölder-regular functions with respect to the uniform norm. We constructively prove that deep ReQU neural networks can approximate any function in the $R$-ball of $r$-Hölder-regular functions $\mathcal{H}^{r,R}([-1,1]^d)$ to any accuracy $\epsilon$ using at most $\mathcal{O}\left(\epsilon^{-d/2r}\right)$ neurons and a fixed number of layers. This result highlights that the effectiveness of the approximation depends significantly on the smoothness of the target function and the characteristics of the ReQU activation. Our proof is based on approximating local Taylor expansions with deep ReQU neural networks, demonstrating their ability to capture the behavior of Hölder-regular functions effectively. Furthermore, the results generalize straightforwardly to any Rectified Power Unit (RePU) activation function of the form $\max(0,x)^p$ with $p \geq 2$, indicating the broader applicability of our findings within this family of activations.
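A key algebraic reason ReQU networks can reproduce local Taylor polynomials exactly is that squares, and hence products via polarization, are computable by a fixed number of ReQU units. The following NumPy sketch illustrates these identities; it is an illustration of the general mechanism, not the paper's construction, and the names `requ`, `square`, and `product` are ours:

```python
import numpy as np

def requ(x):
    """Rectified Quadratic Unit: max(0, x)^2."""
    return np.maximum(0.0, x) ** 2

def square(x):
    # Exactly one branch is nonzero, so ReQU(x) + ReQU(-x) = x^2 exactly.
    return requ(x) + requ(-x)

def product(x, y):
    # Polarization identity: x*y = ((x+y)^2 - (x-y)^2) / 4,
    # so products (and hence monomials of Taylor polynomials)
    # are representable with a fixed number of ReQU units.
    return (square(x + y) - square(x - y)) / 4.0

x = np.linspace(-1.0, 1.0, 5)
print(np.allclose(square(x), x ** 2))         # True: exact squaring
print(np.allclose(product(x, 0.5), 0.5 * x))  # True: exact products
```

Because these identities are exact rather than approximate, the only approximation error in such constructions comes from how well a local Taylor polynomial fits the Hölder-regular target, which is where the smoothness parameter $r$ enters the rate.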

Authors (1)
