
On the Gap Between Strict-Saddles and True Convexity: An $\Omega(\log d)$ Lower Bound for Eigenvector Approximation

Published 14 Apr 2017 in cs.LG, cs.DS, cs.IT, math.CO, math.IT, and stat.ML | (arXiv:1704.04548v1)

Abstract: We prove a \emph{query complexity} lower bound on rank-one principal component analysis (PCA). We consider an oracle model where, given a symmetric matrix $M \in \mathbb{R}^{d \times d}$, an algorithm is allowed to make $T$ \emph{exact} queries of the form $w^{(i)} = Mv^{(i)}$ for $i \in \{1,\dots,T\}$, where $v^{(i)}$ is drawn from a distribution which depends arbitrarily on the past queries and measurements $\{v^{(j)}, w^{(j)}\}_{1 \le j \le i-1}$. We show that for a small constant $\epsilon$, any adaptive, randomized algorithm which can find a unit vector $\widehat{v}$ for which $\widehat{v}^{\top} M \widehat{v} \ge (1-\epsilon)\|M\|$, with even small probability, must make $T = \Omega(\log d)$ queries. In addition to settling a widely-held folk conjecture, this bound demonstrates a fundamental gap between convex optimization and "strict-saddle" non-convex optimization, of which PCA is a canonical example: in the former, first-order methods can have dimension-free iteration complexity, whereas in PCA, the iteration complexity of gradient-based methods must necessarily grow with the dimension. Our argument proceeds via a reduction to estimating the rank-one spike in a deformed Wigner model. We establish lower bounds for this model by developing a "truncated" analogue of the $\chi^2$ Bayes-risk lower bound of Chen et al.
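To make the oracle model concrete, here is a minimal Python sketch (not from the paper; all function names and the simulated instance are illustrative) of an algorithm that accesses $M$ only through exact queries $w^{(i)} = Mv^{(i)}$, with each query direction chosen adaptively from past measurements. Power iteration is the classical baseline in this model; its query count grows logarithmically with $d$, consistent with the $\Omega(\log d)$ lower bound above.

```python
import numpy as np

def make_oracle(M):
    """Wrap a symmetric matrix as a query oracle v -> Mv (the only access allowed)."""
    return lambda v: M @ v

def power_iteration(oracle, d, T, rng):
    """Issue T exact queries w^(i) = M v^(i), adapting each v^(i) to the last answer."""
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    for _ in range(T):
        w = oracle(v)              # one query to the oracle
        v = w / np.linalg.norm(w)  # next query direction depends on the measurement
    return v

# Simulated instance (illustrative only): a rank-one spike plus Wigner-style noise,
# echoing the deformed Wigner model used in the paper's reduction.
rng = np.random.default_rng(0)
d = 512
G = rng.standard_normal((d, d))
W = (G + G.T) / np.sqrt(2 * d)    # symmetric noise matrix, spectral norm ~ 2
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
M = 2.0 * np.outer(u, u) + W      # spike strength chosen above the noise level

v_hat = power_iteration(make_oracle(M), d, T=50, rng=rng)
print("Rayleigh quotient:", v_hat @ M @ v_hat)   # should approach ||M||
print("Spectral norm    :", np.linalg.norm(M, 2))
```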
