The Local Landscape of Phase Retrieval Under Limited Samples (2311.15221v2)

Published 26 Nov 2023 in cs.IT, cs.LG, eess.SP, math.IT, math.OC, math.ST, stat.ML, and stat.TH

Abstract: In this paper, we present a fine-grained analysis of the local landscape of phase retrieval under the regime of limited samples. Specifically, we aim to ascertain the minimal sample size required to guarantee a benign local landscape surrounding global minima in high dimensions. Let $n$ and $d$ denote the sample size and input dimension, respectively. We first explore the local convexity and establish that when $n=o(d\log d)$, for almost every fixed point in the local ball, the Hessian matrix has negative eigenvalues, provided $d$ is sufficiently large. Consequently, the local landscape is highly non-convex. We next consider the one-point convexity and show that, as long as $n=\omega(d)$, with high probability, the landscape is one-point strongly convex in the local annulus $\{w\in\mathbb{R}^d: o_d(1)\leqslant \|w-w^*\|\leqslant c\}$, where $w^*$ is the ground truth and $c$ is an absolute constant. This implies that gradient descent, initialized from any point in this domain, converges to an $o_d(1)$-loss solution exponentially fast. Furthermore, we show that when $n=o(d\log d)$, there is a radius of $\widetilde\Theta\left(\sqrt{1/d}\right)$ such that one-point convexity breaks down in the corresponding smaller local ball. This indicates the impossibility of establishing convergence of gradient descent to the exact $w^*$ under limited samples by relying solely on one-point convexity.
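For context, one-point strong convexity at $w^*$ means $\langle \nabla L(w), w - w^* \rangle \geqslant \mu \|w - w^*\|^2$ throughout the region, which is the property driving the exponential convergence claim above. The sketch below illustrates the setting numerically: gradient descent on the standard real-valued phase retrieval objective $L(w) = \frac{1}{n}\sum_i ((a_i^\top w)^2 - y_i)^2$ with Gaussian measurements, the model this literature typically studies. The exact objective, step size, and initialization radius here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 100, 1000                     # dimension d and sample size n (n = omega(d))

w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)     # ground truth w* on the unit sphere
A = rng.standard_normal((n, d))      # rows a_i: Gaussian sensing vectors
y = (A @ w_star) ** 2                # phaseless measurements y_i = (a_i^T w*)^2

def loss(w):
    """Empirical squared loss L(w) = (1/n) * sum_i ((a_i^T w)^2 - y_i)^2."""
    return np.mean(((A @ w) ** 2 - y) ** 2)

def grad(w):
    """Gradient of L: (4/n) * sum_i ((a_i^T w)^2 - y_i) (a_i^T w) a_i."""
    Aw = A @ w
    return (4.0 / n) * (A.T @ ((Aw ** 2 - y) * Aw))

# Start inside the local annulus around w* where the abstract asserts
# one-point strong convexity (the radius 0.3 stands in for the constant c).
u = rng.standard_normal(d)
w = w_star + 0.3 * u / np.linalg.norm(u)

eta = 0.01                           # step size, chosen by hand for this sketch
for _ in range(500):
    w = w - eta * grad(w)

# Phase retrieval is invariant to the sign of w, so report distance to ±w*.
dist = min(np.linalg.norm(w - w_star), np.linalg.norm(w + w_star))
print(f"loss = {loss(w):.3e}, distance to +/- w* = {dist:.3e}")
```

With $n = 10d$ as above, the iterate contracts toward $w^*$ at a geometric rate, matching the abstract's positive result for $n=\omega(d)$; the paper's negative results concern what happens inside the much smaller $\widetilde\Theta(\sqrt{1/d})$ ball, which this sketch does not probe.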
