
A Canonical Transform for Strengthening the Local $L^p$-Type Universal Approximation Property (2006.14378v3)

Published 24 Jun 2020 in cs.LG, cs.NE, math.FA, and stat.ML

Abstract: Most $L^p$-type universal approximation theorems guarantee that a given machine learning model class $\mathscr{F}\subseteq C(\mathbb{R}^d,\mathbb{R}^D)$ is dense in $L^p_{\mu}(\mathbb{R}^d,\mathbb{R}^D)$ for any suitable finite Borel measure $\mu$ on $\mathbb{R}^d$. Unfortunately, this means that the model's approximation quality can rapidly degenerate outside some compact subset of $\mathbb{R}^d$, as any such measure is largely concentrated on some bounded subset of $\mathbb{R}^d$. This paper proposes a generic solution to this approximation-theoretic problem by introducing a canonical transformation which "upgrades $\mathscr{F}$'s approximation property" in the following sense. The transformed model class, denoted by $\mathscr{F}\text{-tope}$, is shown to be dense in $L^p_{\mu,\text{strict}}(\mathbb{R}^d,\mathbb{R}^D)$, a topological space whose elements are locally $p$-integrable functions and whose topology is much finer than the usual norm topology on $L^p_{\mu}(\mathbb{R}^d,\mathbb{R}^D)$; here $\mu$ is any suitable $\sigma$-finite Borel measure on $\mathbb{R}^d$. Next, we show that if $\mathscr{F}$ is any family of analytic functions then there is always a strict "gap" between $\mathscr{F}\text{-tope}$'s expressibility and that of $\mathscr{F}$, since we find that $\mathscr{F}$ can never be dense in $L^p_{\mu,\text{strict}}(\mathbb{R}^d,\mathbb{R}^D)$. In the general case, where $\mathscr{F}$ may contain non-analytic functions, we provide an abstract form of these results guaranteeing that there always exists some function space in which $\mathscr{F}\text{-tope}$ is dense but $\mathscr{F}$ is not, while the converse is never possible. Applications to feedforward networks, convolutional neural networks, and polynomial bases are explored.
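In symbols, the abstract's two density claims for an analytic family $\mathscr{F}$ can be paraphrased as follows, writing $\overline{(\cdot)}^{\,\text{strict}}$ for closure in the strict topology (this notation is a paraphrasing convention, not taken from the paper itself):

$$\overline{\mathscr{F}\text{-tope}}^{\,\text{strict}} = L^p_{\mu,\text{strict}}(\mathbb{R}^d,\mathbb{R}^D), \qquad \overline{\mathscr{F}}^{\,\text{strict}} \subsetneq L^p_{\mu,\text{strict}}(\mathbb{R}^d,\mathbb{R}^D).$$

That is, the transformed class fills out the strict space, while any family of analytic functions necessarily leaves a gap; this is the precise sense of the "expressibility gap" asserted above.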
