
List Decodable Learning via Sum of Squares

Published 12 May 2019 in cs.DS (arXiv:1905.04660v1)

Abstract: In the list-decodable learning setup, an overwhelming majority (say a $(1-\beta)$-fraction) of the input data consists of outliers, and the goal of an algorithm is to output a small list $\mathcal{L}$ of hypotheses such that one of them agrees with the inliers. We develop a framework for list-decodable learning via the Sum-of-Squares SDP hierarchy and demonstrate it on two basic statistical estimation problems.

Linear regression: Suppose we are given labelled examples $\{(X_i, y_i)\}_{i \in [N]}$ containing a subset $S$ of $\beta N$ inliers $\{X_i\}_{i \in S}$ that are drawn i.i.d. from the standard Gaussian distribution $N(0, I)$ in $\mathbb{R}^d$, where the corresponding labels $y_i$ are well-approximated by a linear function $\ell$. We devise an algorithm that outputs a list $\mathcal{L}$ of linear functions such that some $\hat{\ell} \in \mathcal{L}$ is close to $\ell$. This yields the first algorithm for linear regression in the list-decodable setting. Our results hold for any distribution of examples whose concentration and anti-concentration can be certified by Sum-of-Squares proofs.

Mean estimation: Given data points $\{X_i\}_{i \in [N]}$ containing a subset $S$ of $\beta N$ inliers $\{X_i\}_{i \in S}$ drawn i.i.d. from a Gaussian distribution $N(\mu, I)$ in $\mathbb{R}^d$, we devise an algorithm that generates a list $\mathcal{L}$ of means such that some $\hat{\mu} \in \mathcal{L}$ is close to $\mu$. The recovery guarantees of the algorithm are analogous to those of the existing algorithms for this problem by Diakonikolas et al. and Kothari et al.

In independent and concurrent work, Karmalkar et al. [KlivansKS19] also obtain an algorithm for list-decodable linear regression using the Sum-of-Squares SDP hierarchy.
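To make the list-decodable mean estimation setup concrete, here is a minimal toy sketch in NumPy. It is NOT the paper's Sum-of-Squares algorithm: it generates a synthetic instance ($\beta N$ inliers from $N(\mu, I)$ plus a far-away outlier cluster, an easy special case) and runs a naive greedy-clustering baseline that outputs a short list of candidate means. The data layout, the `radius` parameter, and the cluster-size threshold are all illustrative assumptions; on genuinely adversarial outliers this baseline fails, which is exactly why SoS-based methods are needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of the list-decodable setup: a beta-fraction of inliers
# drawn from N(mu, I), the remaining points are outliers (here, for
# simplicity, a single far-away cluster -- an easy, non-adversarial case).
d, N, beta = 5, 1000, 0.3
mu = np.full(d, 5.0)                    # unknown true mean of the inliers
n_in = int(beta * N)
inliers = rng.normal(mu, 1.0, size=(n_in, d))
outliers = rng.normal(-10.0, 1.0, size=(N - n_in, d))
X = rng.permutation(np.vstack([inliers, outliers]))

def list_decode_means(X, beta, radius):
    """Naive greedy baseline (NOT the SoS algorithm from the paper):
    repeatedly take a seed point, grab everything within `radius` of it,
    and emit the cluster mean whenever the cluster is large enough to
    plausibly contain the beta*N inliers."""
    pts = X.copy()
    hypotheses = []
    while len(pts):
        dists = np.linalg.norm(pts - pts[0], axis=1)
        cluster = pts[dists <= radius]
        if len(cluster) >= 0.5 * beta * len(X):
            hypotheses.append(cluster.mean(axis=0))
        pts = pts[dists > radius]       # each pass removes >= 1 point
    return hypotheses

# Radius ~ a few standard deviations times sqrt(d) captures one
# Gaussian cluster; the resulting list is short and, on this easy
# instance, contains a hypothesis close to mu.
L = list_decode_means(X, beta, radius=4 * np.sqrt(d))
errs = [np.linalg.norm(h - mu) for h in L]
print(len(L), min(errs))
```

Note the list-decoding guarantee being illustrated: the algorithm cannot identify which hypothesis is correct (the outlier cluster is just as "Gaussian-looking" as the inliers when $\beta < 1/2$), so the best one can hope for is a small list with one entry close to $\mu$.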

Citations (64)

