Online Sparse Linear Regression (1603.02250v1)

Published 7 Mar 2016 in cs.LG

Abstract: We consider the online sparse linear regression problem, which is the problem of sequentially making predictions observing only a limited number of features in each round, to minimize regret with respect to the best sparse linear regressor, where prediction accuracy is measured by square loss. We give an inefficient algorithm that obtains regret bounded by $\tilde{O}(\sqrt{T})$ after $T$ prediction rounds. We complement this result by showing that no algorithm running in polynomial time per iteration can achieve regret bounded by $O(T^{1-\delta})$ for any constant $\delta > 0$ unless $\text{NP} \subseteq \text{BPP}$. This computational hardness result resolves an open problem presented in COLT 2014 (Kale, 2014) and also posed by Zolghadr et al. (2013). This hardness result holds even if the algorithm is allowed to access more features than the best sparse linear regressor up to a logarithmic factor in the dimension.
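For concreteness, the protocol can be read as follows: in each round $t$ the learner observes at most $k$ of the $d$ features of $x_t$, predicts $\hat{y}_t$, and suffers square loss $(\hat{y}_t - y_t)^2$; the regret after $T$ rounds is $\sum_{t=1}^{T} (\hat{y}_t - y_t)^2 - \min_{\|w\|_0 \le k} \sum_{t=1}^{T} (w^\top x_t - y_t)^2$. Below is a minimal sketch of this interaction loop; the dimensions, synthetic data stream, and naive probing strategy are illustrative assumptions, not the paper's $\tilde{O}(\sqrt{T})$-regret algorithm.

```python
# Minimal sketch of the online sparse linear regression protocol from the
# abstract. All concrete choices (d, k, T, the synthetic stream, the naive
# probing strategy) are illustrative assumptions, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
d, k, T = 20, 3, 1000  # ambient dimension, feature budget, rounds

# Fixed k-sparse comparator, used here only to generate the synthetic stream.
w_star = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
w_star[support] = 1.0

learner_loss, comparator_loss = 0.0, 0.0
for t in range(T):
    x = rng.standard_normal(d)
    y = w_star @ x + 0.1 * rng.standard_normal()

    # The learner may observe at most k feature values per round.
    # This naive strategy always probes the first k coordinates and
    # predicts with unit weights on them; it illustrates the interaction,
    # not the paper's inefficient O~(sqrt(T))-regret algorithm.
    y_hat = x[:k].sum()

    learner_loss += (y_hat - y) ** 2            # square loss per round
    comparator_loss += (w_star @ x - y) ** 2    # best k-sparse regressor

print(f"regret after {T} rounds: {learner_loss - comparator_loss:.1f}")
```

Because the naive strategy probes a fixed set of coordinates that generally misses the comparator's support, its regret grows linearly in $T$, which is exactly the gap between trivial strategies and the $\tilde{O}(\sqrt{T})$ bound the paper establishes.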

Authors (3)
  1. Dean Foster (28 papers)
  2. Satyen Kale (50 papers)
  3. Howard Karloff (6 papers)
Citations (17)
