
Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations

Published 17 Jun 2024 in cs.LG and stat.ML | arXiv:2406.11828v1

Abstract: We study the computational and sample complexity of learning a target function $f_*:\mathbb{R}^d\to\mathbb{R}$ with additive structure, that is, $f_*(x) = \frac{1}{\sqrt{M}}\sum_{m=1}^{M} f_m(\langle x, v_m\rangle)$, where $f_1,f_2,\ldots,f_M:\mathbb{R}\to\mathbb{R}$ are nonlinear link functions of single-index models (ridge functions) with diverse and near-orthogonal index features $\{v_m\}_{m=1}^{M}$, and the number of additive tasks $M$ grows with the dimensionality, $M\asymp d^{\gamma}$ for $\gamma\ge 0$. This problem setting is motivated by the classical additive model literature, the recent representation learning theory of two-layer neural networks, and large-scale pretraining, where the model simultaneously acquires a large number of "skills" that are often localized in distinct parts of the trained network. We prove that a large subset of polynomial $f_*$ can be efficiently learned by gradient descent training of a two-layer neural network, with polynomial statistical and computational complexity that depends on the number of tasks $M$ and the information exponent of $f_m$, despite the link functions being unknown and $M$ growing with the dimensionality. We complement this learnability guarantee with a computational hardness result by establishing statistical query (SQ) lower bounds for both correlational SQ and full SQ algorithms.
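To make the target structure concrete, below is a minimal NumPy sketch of sampling data from such an additive model of ridge functions. The specific choices (dimension $d$, exponent $\gamma$, a QR decomposition to produce near-orthogonal index features, and Hermite-polynomial link functions) are illustrative assumptions for the sketch, not settings taken from the paper.

```python
import numpy as np

# Sketch of the additive target f_*(x) = (1/sqrt(M)) * sum_m f_m(<x, v_m>).
# Assumptions (not from the paper): Gaussian inputs, near-orthogonal directions
# obtained via QR, and Hermite-polynomial links as placeholder f_m.

d = 256                      # input dimension
gamma = 0.5
M = int(d ** gamma)          # number of additive tasks, M ~ d^gamma

rng = np.random.default_rng(0)

# Near-orthogonal index features {v_m}: orthonormal columns of a random matrix.
V, _ = np.linalg.qr(rng.standard_normal((d, M)))   # shape (d, M), columns v_m

def he2(z):
    return z**2 - 1          # Hermite polynomial He_2 (information exponent 2)

def he3(z):
    return z**3 - 3 * z      # Hermite polynomial He_3 (information exponent 3)

# Illustrative "diverse" link functions, alternating between He_2 and He_3.
links = [he2 if m % 2 == 0 else he3 for m in range(M)]

def f_star(x):
    """Evaluate f_*(x) = (1/sqrt(M)) * sum_m f_m(<x, v_m>) for x of shape (n, d)."""
    z = x @ V                                        # projections <x, v_m>, shape (n, M)
    out = sum(links[m](z[:, m]) for m in range(M))
    return out / np.sqrt(M)

# Draw a Gaussian training sample (x_i, y_i = f_*(x_i)).
n = 4096
X = rng.standard_normal((n, d))
y = f_star(X)
```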

Citations (6)
