
Approximation capabilities of neural networks on unbounded domains (1910.09293v8)

Published 21 Oct 2019 in cs.LG and stat.ML

Abstract: In this paper, we prove that a shallow neural network with a monotone sigmoid, ReLU, ELU, Softplus, or LeakyReLU activation function can approximate arbitrarily well any $L^p$ ($p \ge 2$) integrable function defined on $\mathbb{R} \times [0,1]^n$. We also prove that a shallow neural network with a sigmoid, ReLU, ELU, Softplus, or LeakyReLU activation function expresses no nonzero integrable function defined on the Euclidean plane. Together with a recent result that a deep ReLU network can approximate arbitrarily well any integrable function on Euclidean spaces, this provides a new perspective on the advantage of multiple hidden layers in the context of ReLU networks. Lastly, we prove that a ReLU network of depth 3 is a universal approximator in $L^p(\mathbb{R}^n)$.
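The first result concerns one-hidden-layer networks of the form $N(x) = \sum_{i=1}^{k} c_i\,\sigma(w_i^\top x + b_i)$. As a rough empirical illustration only (not the paper's construction or proof technique), the Python sketch below fits the outer coefficients of a ReLU network with randomly fixed hidden weights to a square-integrable target on the strip $\mathbb{R} \times [0,1]$, truncating the unbounded axis to [-5, 5] for sampling. The target function, unit count, and truncation range are all illustrative assumptions.

# Minimal random-features sketch (assumptions: target, k, truncation),
# illustrating shallow ReLU approximation on the strip R x [0,1].
import numpy as np

rng = np.random.default_rng(0)

def target(x, y):
    # Square-integrable target on R x [0,1]: Gaussian decay in x.
    return np.exp(-x**2) * np.sin(np.pi * y)

# Sample points from the strip, with R truncated to [-5, 5].
n = 4096
x = rng.uniform(-5.0, 5.0, n)
y = rng.uniform(0.0, 1.0, n)
X = np.stack([x, y], axis=1)
t = target(x, y)

# One hidden layer of k ReLU units: N(x) = sum_i c_i * relu(w_i . x + b_i).
k = 256
W = rng.normal(0.0, 1.0, (2, k))
b = rng.normal(0.0, 1.0, k)

relu = lambda z: np.maximum(z, 0.0)

# With the hidden weights fixed at random, the outer coefficients c
# solve a linear least-squares problem over the hidden activations.
H = relu(X @ W + b)                      # (n, k) hidden-layer outputs
c, *_ = np.linalg.lstsq(H, t, rcond=None)

mse = np.mean((H @ c - t) ** 2)
print(f"mean squared error on the sampled strip: {mse:.2e}")

Increasing k drives the empirical error down on the sampled region, which is consistent with (though far weaker than) the $L^p$ density statement in the abstract; the theorem itself guarantees approximation in norm over the full unbounded strip.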

Citations (16)
