Sharpened Error Bounds for Random Sampling Based $\ell_2$ Regression

Published 30 Mar 2014 in cs.LG, cs.NA, and stat.ML | (1403.7737v2)

Abstract: Given a data matrix $X \in \mathbb{R}^{n\times d}$ and a response vector $y \in \mathbb{R}^{n}$ with $n > d$, it costs $O(nd^2)$ time and $O(nd)$ space to solve the least squares regression (LSR) problem. When $n$ and $d$ are both large, exactly solving the LSR problem is very expensive. When $n \gg d$, one feasible approach to speeding up LSR is to randomly embed $y$ and all columns of $X$ into a smaller subspace $\mathbb{R}^{c}$; the induced LSR problem has the same number of columns but far fewer rows, and it can be solved in $O(cd^2)$ time and $O(cd)$ space. We discuss in this paper two random sampling based methods for solving LSR more efficiently. Previous work showed that leverage-score sampling based LSR achieves $1+\epsilon$ accuracy when $c \geq O(d \epsilon^{-2} \log d)$. In this paper we sharpen this error bound, showing that $c = O(d \log d + d \epsilon^{-1})$ is enough for achieving $1+\epsilon$ accuracy. We also show that when $c \geq O(\mu d \epsilon^{-2} \log d)$, uniform sampling based LSR attains a $2+\epsilon$ bound with positive probability.
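The leverage-score sampling scheme described in the abstract can be sketched as follows: compute the leverage scores of $X$ (the squared row norms of an orthonormal basis for its column space), sample $c$ rows with probabilities proportional to those scores, rescale the sampled rows for unbiasedness, and solve the small induced regression. This is a minimal NumPy sketch of that general recipe, not the paper's exact algorithm; the function name and its interface are illustrative assumptions.

```python
import numpy as np

def leverage_score_sampled_lsr(X, y, c, seed=None):
    """Sketch of leverage-score sampling for least squares regression.

    Samples c rows of (X, y) with probability proportional to the
    leverage scores of X, rescales them, and solves the reduced problem.
    Illustrative only; not the authors' exact procedure.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Leverage scores = squared row norms of Q from a thin QR of X.
    # Computing Q exactly costs O(n d^2); in practice one would use a
    # fast approximation so the overall method beats exact LSR.
    Q, _ = np.linalg.qr(X)
    lev = np.sum(Q ** 2, axis=1)
    p = lev / lev.sum()                  # sampling distribution over rows
    idx = rng.choice(n, size=c, replace=True, p=p)
    scale = 1.0 / np.sqrt(c * p[idx])    # rescaling keeps the sketch unbiased
    X_small = X[idx] * scale[:, None]    # c x d induced problem
    y_small = y[idx] * scale
    beta, *_ = np.linalg.lstsq(X_small, y_small, rcond=None)
    return beta
```

The reduced problem has $c$ rows instead of $n$, so the final solve costs $O(cd^2)$ time and $O(cd)$ space, matching the complexity stated in the abstract.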
