Smooth Trade-off for Tensor PCA via Sharp Bounds for Kikuchi Matrices (2510.03061v1)

Published 3 Oct 2025 in cs.DS and cs.CC

Abstract: In this work, we revisit algorithms for Tensor PCA: given an order-$r$ tensor of the form $T = G + \lambda \cdot v^{\otimes r}$, where $G$ is a random symmetric Gaussian tensor with unit-variance entries and $v$ is an unknown boolean vector in $\{\pm 1\}^n$, what is the minimum $\lambda$ at which one can distinguish $T$ from a random Gaussian tensor and, more generally, recover $v$? As a result of a long line of work, we know that for any $\ell \in \mathbb{N}$, there is an $n^{O(\ell)}$-time algorithm that succeeds when the signal strength $\lambda \gtrsim \sqrt{\log n} \cdot n^{-r/4} \cdot \ell^{1/2 - r/4}$. The question of whether the logarithmic factor is necessary turns out to be crucial to understanding whether larger polynomial time allows recovering the signal at a lower signal strength. Such a smooth trade-off is necessary for tensor PCA to be a candidate problem for quantum speedups [SOKB25]. It was first conjectured by [WAM19] and then, more recently, with an eye on smooth trade-offs, reiterated in a blog post of Bandeira. In this work, we resolve these conjectures and show that spectral algorithms based on the Kikuchi hierarchy [WAM19] succeed whenever $\lambda \geq \Theta_r(1) \cdot n^{-r/4} \cdot \ell^{1/2 - r/4}$, where $\Theta_r(1)$ hides only an absolute constant independent of $n$ and $\ell$. A sharp bound such as this was previously known only for $\ell \leq 3r/4$ via non-asymptotic techniques in random matrix theory inspired by free probability.
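To make the problem setup concrete, here is a minimal NumPy sketch of the spiked-tensor model $T = G + \lambda \cdot v^{\otimes r}$ and of the signal-strength threshold $n^{-r/4} \cdot \ell^{1/2 - r/4}$ from the abstract. This is an illustration only, not the paper's algorithm: the function names are hypothetical, $G$ is drawn as an i.i.d. Gaussian tensor without the symmetrization the paper assumes, and the absolute constant $\Theta_r(1)$ is omitted.

```python
import numpy as np

def spiked_tensor(n, r, lam, rng):
    """Sample T = G + lam * v^(tensor r) with v uniform in {+-1}^n.

    Assumption of this sketch: G has i.i.d. standard Gaussian entries;
    the paper's G is additionally symmetric.
    """
    v = rng.choice([-1.0, 1.0], size=n)
    # Build the rank-one spike v^{otimes r} by repeated outer products.
    spike = v
    for _ in range(r - 1):
        spike = np.multiply.outer(spike, v)
    G = rng.standard_normal((n,) * r)
    return G + lam * spike, v

def kikuchi_threshold(n, r, ell):
    """Signal strength n^{-r/4} * ell^{1/2 - r/4} at which level-ell
    Kikuchi spectral methods succeed, up to the constant Theta_r(1)."""
    return n ** (-r / 4) * ell ** (0.5 - r / 4)

rng = np.random.default_rng(0)
T, v = spiked_tensor(n=20, r=3, lam=1.0, rng=rng)
print(T.shape)  # (20, 20, 20)
# Raising the Kikuchi level ell lowers the required signal strength.
print(kikuchi_threshold(20, 3, 2) > kikuchi_threshold(20, 3, 8))  # True
```

Note how the threshold decreases in $\ell$ for $r \geq 3$ (exponent $1/2 - r/4 < 0$), which is the smooth time/signal trade-off the paper establishes without the extra $\sqrt{\log n}$ factor.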
