Sequential sampling for optimal weighted least squares approximations in hierarchical spaces (1805.10801v1)

Published 28 May 2018 in math.NA, math.PR, math.ST, and stat.TH

Abstract: We consider the problem of approximating an unknown function $u\in L^2(D,\rho)$ from its evaluations at given sampling points $x_1,\dots,x_n\in D$, where $D\subset \mathbb{R}^d$ is a general domain and $\rho$ is a probability measure. The approximation is picked in a linear space $V_m$, where $m=\dim(V_m)$, and computed by a weighted least squares method. Recent results show the advantages of picking the sampling points at random according to a well-chosen probability measure $\mu$ that depends both on $V_m$ and $\rho$. With such a random design, the weighted least squares approximation is proved to be stable with high probability, and to have accuracy comparable to that of the exact $L^2(D,\rho)$-orthogonal projection onto $V_m$, in a near-linear sampling regime $n\sim m\log m$. The present paper is motivated by the adaptive approximation context, in which one typically generates a nested sequence of spaces $(V_m)_{m\geq1}$ with increasing dimension. Although the measure $\mu=\mu_m$ changes with $V_m$, it is possible to recycle the previously generated samples by interpreting $\mu_m$ as a mixture between $\mu_{m-1}$ and an update measure $\sigma_m$. Based on this observation, we discuss sequential sampling algorithms that maintain the stability and approximation properties uniformly over all spaces $V_m$. Our main result is that the total number of computed samples at step $m$ remains of the order $m\log m$ with high probability. Numerical experiments confirm this analysis.
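To make the mixture idea concrete, here is a minimal Python sketch of the kind of sequential scheme the abstract describes, under simplifying assumptions that are not taken from the paper: $D=[-1,1]$, $\rho$ the uniform measure, and $V_m$ the polynomials of degree below $m$ with orthonormal Legendre basis $L_j$. In this setting the optimal measure $d\mu_m = \frac{1}{m}\sum_{j<m} L_j^2\, d\rho$ is a uniform mixture of update measures $d\sigma_{j+1} = L_j^2\, d\rho$, so a point distributed as $\mu_m$ can be produced by recycling a point from $\mu_{m-1}$ with probability $(m-1)/m$ and drawing from $\sigma_m$ otherwise. All function names and the rejection-sampling step are illustrative choices, not the paper's algorithm.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def onb_matrix(x, m):
    """Evaluate the first m orthonormal Legendre polynomials
    L_j(x) = sqrt(2j + 1) P_j(x) at the points x (rho uniform on [-1, 1])."""
    return legvander(np.asarray(x), m - 1) * np.sqrt(2 * np.arange(m) + 1)

def sample_sigma(j, rng):
    """Draw one sample from the update measure d(sigma_{j+1}) = L_j^2 d(rho)
    by rejection sampling, using the bound L_j(x)^2 <= 2j + 1 on [-1, 1]."""
    while True:
        x = rng.uniform(-1.0, 1.0)
        if rng.uniform() * (2 * j + 1) < onb_matrix([x], j + 1)[0, j] ** 2:
            return x

def sample_mu(m, rng):
    """Draw one sample from d(mu_m) = (1/m) sum_{j<m} L_j^2 d(rho),
    a uniform mixture of the update measures sigma_1, ..., sigma_m."""
    return sample_sigma(rng.integers(m), rng)

def grow_sample(samples, m, n_target, rng):
    """Extend samples drawn from mu_{m-1} into n_target samples from mu_m,
    using the mixture mu_m = ((m-1)/m) mu_{m-1} + (1/m) sigma_m."""
    pool, out = list(samples), []
    for _ in range(n_target):
        if rng.uniform() < (m - 1) / m:
            # the point should follow mu_{m-1}: recycle a stored sample
            # if one is left, otherwise compute a fresh draw
            out.append(pool.pop() if pool else sample_mu(m - 1, rng))
        else:
            out.append(sample_sigma(m - 1, rng))  # draw from the update measure
    return np.array(out)

def weighted_ls_fit(u, x, m):
    """Weighted least squares fit of u in V_m from samples x ~ mu_m,
    with the optimal weights w_m(x) = m / sum_{j<m} L_j(x)^2."""
    L = onb_matrix(x, m)
    sw = np.sqrt(m / np.sum(L**2, axis=1) / len(x))
    coef, *_ = np.linalg.lstsq(sw[:, None] * L, sw * u(x), rcond=None)
    return coef  # coefficients in the orthonormal Legendre basis

rng = np.random.default_rng(0)
x = np.array([sample_mu(1, rng)])      # mu_1 = rho, since L_0 = 1
for m in range(2, 13):                 # grow V_1 into V_12 with n ~ m log m
    n = max(len(x), int(np.ceil(2 * m * np.log(m + 1))))
    x = grow_sample(x, m, n, rng)
c = weighted_ls_fit(np.exp, x, 12)
print(np.abs(np.exp(1.0) - onb_matrix([1.0], 12) @ c))  # error at x = 1
```

The weights $w_m(x) = m / \sum_{j<m} L_j(x)^2$ are exactly the density ratio $d\rho/d\mu_m$, which is what makes the weighted Gramian well conditioned with high probability once $n \gtrsim m\log m$. The sketch redraws discarded points naively; the paper's contribution is a recycling scheme guaranteeing that the total number of computed samples stays of order $m\log m$ uniformly over the nested spaces.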
