
Approximation in $L^p(\mu)$ with deep ReLU neural networks (1904.04789v1)

Published 9 Apr 2019 in math.FA and cs.LG

Abstract: We discuss the expressive power of neural networks which use the non-smooth ReLU activation function $\varrho(x) = \max\{0,x\}$ by analyzing the approximation theoretic properties of such networks. The existing results mainly fall into two categories: approximation using ReLU networks with a fixed depth, or using ReLU networks whose depth increases with the approximation accuracy. After reviewing these findings, we show that the results concerning networks with fixed depth, which up to now only consider approximation in $L^p(\lambda)$ for the Lebesgue measure $\lambda$, can be generalized to approximation in $L^p(\mu)$, for any finite Borel measure $\mu$. In particular, the generalized results apply in the usual setting of statistical learning theory, where one is interested in approximation in $L^2(\mathbb{P})$, with the probability measure $\mathbb{P}$ describing the distribution of the data.
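To make the approximation notion concrete, the following displayed formula is a sketch using the abstract's notation, with the target function $f$ and the ReLU network realization $\Phi$ as illustrative placeholders: approximating $f$ in $L^p(\mu)$ means controlling the error

$$\|f - \Phi\|_{L^p(\mu)} = \left( \int |f(x) - \Phi(x)|^p \, d\mu(x) \right)^{1/p},$$

where $\mu$ is a finite Borel measure. Taking $\mu = \mathbb{P}$ and $p = 2$ recovers the $L^2(\mathbb{P})$ error that is relevant in statistical learning, whereas the previously known fixed-depth results covered only $\mu = \lambda$, the Lebesgue measure.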

Citations (3)
