On the Connection Between Learning Two-Layers Neural Networks and Tensor Decomposition (1802.07301v3)

Published 20 Feb 2018 in cs.LG, cs.DS, and stat.ML

Abstract: We establish connections between the problem of learning a two-layer neural network and tensor decomposition. We consider a model with feature vectors $\boldsymbol x \in \mathbb R^d$, $r$ hidden units with weights $\{\boldsymbol w_i\}_{1\le i \le r}$ and output $y\in \mathbb R$, i.e., $y=\sum_{i=1}^r \sigma(\boldsymbol w_i^{\mathsf T}\boldsymbol x)$, with activation functions given by low-degree polynomials. In particular, if $\sigma(x) = a_0+a_1x+a_3x^3$, we prove that no polynomial-time learning algorithm can outperform the trivial predictor that assigns to each example the response variable $\mathbb E(y)$, when $d^{3/2}\ll r\ll d^2$. Our conclusion holds for a `natural data distribution', namely standard Gaussian feature vectors $\boldsymbol x$, and output distributed according to a two-layer neural network with random isotropic weights, and under a certain complexity-theoretic assumption on tensor decomposition. Roughly speaking, we assume that no polynomial-time algorithm can substantially outperform current methods for tensor decomposition based on the sum-of-squares hierarchy. We also prove generalizations of this statement for higher degree polynomial activations, and non-random weight vectors. Remarkably, several existing algorithms for learning two-layer networks with rigorous guarantees are based on tensor decomposition. Our results support the idea that this is indeed the core computational difficulty in learning such networks, under the stated generative model for the data. As a side result, we show that under this model learning the network requires accurate learning of its weights, a property that does not hold in a more general setting.
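
The generative model described in the abstract is straightforward to simulate. Below is a minimal Python/NumPy sketch: it draws standard Gaussian feature vectors, random isotropic weight vectors, and outputs $y=\sum_{i=1}^r \sigma(\boldsymbol w_i^{\mathsf T}\boldsymbol x)$ with the cubic activation $\sigma(t)=a_0+a_1t+a_3t^3$, then evaluates the trivial baseline predictor $\mathbb E(y)$ against which the hardness result is stated. The particular dimensions and coefficient values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sizes; the paper's hardness regime requires d^{3/2} << r << d^2.
rng = np.random.default_rng(0)
d, r, n = 50, 400, 10_000          # feature dimension, hidden units, samples
a0, a1, a3 = 0.0, 1.0, 1.0         # coefficients of sigma(t) = a0 + a1*t + a3*t^3

# Random isotropic weights: rows drawn uniformly from the unit sphere in R^d.
W = rng.standard_normal((r, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)

# Standard Gaussian features and network outputs y = sum_i sigma(w_i^T x).
X = rng.standard_normal((n, d))
Z = X @ W.T                        # shape (n, r): pre-activations w_i^T x
Y = (a0 + a1 * Z + a3 * Z**3).sum(axis=1)

# The trivial predictor outputs E(y) for every example; its squared-error
# risk is simply Var(y), the benchmark that the lower bound refers to.
print("E(y) estimate:", Y.mean())
print("trivial predictor risk (Var(y)):", Y.var())
```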

Citations (56)
