Fundamental Limits of Learning High-dimensional Simplices in Noisy Regimes (2506.10101v1)
Abstract: In this paper, we establish sample complexity bounds for learning high-dimensional simplices in $\mathbb{R}^K$ from noisy data. Specifically, we consider $n$ i.i.d. samples uniformly drawn from an unknown simplex in $\mathbb{R}^K$, each corrupted by additive Gaussian noise of unknown variance. We prove an algorithm exists that, with high probability, outputs a simplex within $\ell_2$ or total variation (TV) distance at most $\varepsilon$ from the true simplex, provided $n \ge (K^2/\varepsilon^2)\, e^{\mathcal{O}(K/\mathrm{SNR}^2)}$, where $\mathrm{SNR}$ is the signal-to-noise ratio. Extending our prior work (Saberi et al., 2023), we derive new information-theoretic lower bounds, showing that simplex estimation within TV distance $\varepsilon$ requires at least $n \ge \Omega(K^3 \sigma^2/\varepsilon^2 + K/\varepsilon)$ samples, where $\sigma^2$ denotes the noise variance. In the noiseless scenario, our lower bound $n \ge \Omega(K/\varepsilon)$ matches known upper bounds up to constant factors. We resolve an open question by demonstrating that when $\mathrm{SNR} \ge \Omega(K^{1/2})$, noisy-case complexity aligns with the noiseless case. Our analysis leverages sample compression techniques (Ashtiani et al., 2018) and introduces a novel Fourier-based method for recovering distributions from noisy observations, potentially applicable beyond simplex learning.
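To make the data model concrete, the following is a minimal sketch (not from the paper) of how such noisy samples could be simulated: points drawn uniformly from a simplex via Dirichlet-distributed barycentric weights, then corrupted by additive Gaussian noise. The vertex matrix, parameter values, and the SNR proxy are illustrative assumptions; the paper's precise definition of $\mathrm{SNR}$ may differ.

```python
# Illustrative sketch of the noisy-simplex sampling model described in the abstract.
# All names and parameter values here are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)

K = 5          # ambient dimension (the simplex has K + 1 vertices)
n = 10_000     # number of samples
sigma = 0.1    # noise standard deviation (unknown to the learner in the paper)

# Hypothetical ground-truth simplex: K + 1 random vertices in R^K.
V = rng.normal(size=(K + 1, K))

# Uniform samples from the simplex: Dirichlet(1, ..., 1) barycentric weights.
W = rng.dirichlet(np.ones(K + 1), size=n)   # shape (n, K + 1)
X_clean = W @ V                             # points inside the simplex

# Additive Gaussian noise of variance sigma^2 in every coordinate.
X_noisy = X_clean + sigma * rng.normal(size=(n, K))

# One possible SNR proxy: signal spread relative to the noise level.
snr = np.std(X_clean) / sigma
print(f"empirical SNR proxy: {snr:.2f}")
```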