
SQ Lower Bounds for Learning Single Neurons with Massart Noise (2210.09949v1)

Published 18 Oct 2022 in cs.LG, cs.DS, math.ST, stat.ML, and stat.TH

Abstract: We study the problem of PAC learning a single neuron in the presence of Massart noise. Specifically, for a known activation function $f: \mathbb{R} \to \mathbb{R}$, the learner is given access to labeled examples $(\mathbf{x}, y) \in \mathbb{R}^d \times \mathbb{R}$, where the marginal distribution of $\mathbf{x}$ is arbitrary and the corresponding label $y$ is a Massart corruption of $f(\langle \mathbf{w}, \mathbf{x} \rangle)$. The goal of the learner is to output a hypothesis $h: \mathbb{R}^d \to \mathbb{R}$ with small squared loss. For a range of activation functions, including ReLUs, we establish super-polynomial Statistical Query (SQ) lower bounds for this learning problem. In more detail, we prove that no efficient SQ algorithm can approximate the optimal error within any constant factor. Our main technical contribution is a novel SQ-hard construction for learning $\{\pm 1\}$-weight Massart halfspaces on the Boolean hypercube that is interesting in its own right.
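To make the setup concrete, here is a minimal Python sketch of the Massart halfspace model that the paper's hard construction targets: examples drawn from the Boolean hypercube $\{\pm 1\}^d$, labeled by a halfspace with $\{\pm 1\}$ weights, with each label flipped with a point-dependent probability $\eta(\mathbf{x}) \le \eta < 1/2$. The dimension, noise bound, and sampler name below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 21      # dimension (odd, so <w, x> is never zero on the hypercube)
eta = 0.4   # Massart noise upper bound; must satisfy eta < 1/2

# Illustrative {+1, -1}-weight vector defining the target halfspace.
w = rng.choice([-1.0, 1.0], size=d)

def sample_massart_halfspace(n):
    """Draw n labeled examples from the Massart halfspace model on {+1,-1}^d.

    The clean label is sign(<w, x>); it is flipped independently with a
    point-dependent probability eta(x) <= eta. Here eta(x) is drawn
    uniformly from [0, eta] purely for illustration -- in the Massart
    model the per-point flip probabilities may be chosen adversarially.
    """
    X = rng.choice([-1.0, 1.0], size=(n, d))
    clean = np.sign(X @ w)
    flip_prob = rng.uniform(0.0, eta, size=n)  # eta(x) <= eta for each x
    flips = rng.random(n) < flip_prob
    y = np.where(flips, -clean, clean)
    return X, y

X, y = sample_massart_halfspace(1000)
print("empirical flip rate:", float(np.mean(y != np.sign(X @ w))))
```

The paper's SQ lower bound says that, even in this simply stated binary model, no efficient SQ algorithm can approximate the optimal error within any constant factor; the single-neuron hardness (e.g., for ReLUs) is derived from this construction.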

Authors (4)
  1. Ilias Diakonikolas (160 papers)
  2. Daniel M. Kane (128 papers)
  3. Lisheng Ren (8 papers)
  4. Yuxin Sun (15 papers)
Citations (6)
