Efficient Elastic Net Regularization for Sparse Linear Models (1505.06449v3)

Published 24 May 2015 in cs.LG

Abstract: This paper presents an algorithm for efficient training of sparse linear models with elastic net regularization. Extending previous work on delayed updates, the new algorithm applies stochastic gradient updates to non-zero features only, bringing weights current as needed with closed-form updates. Closed-form delayed updates for the $\ell_1$, $\ell_{\infty}$, and rarely used $\ell_2$ regularizers have been described previously. This paper provides closed-form updates for the popular squared norm $\ell_2^2$ and elastic net regularizers. We provide dynamic programming algorithms that perform each delayed update in constant time. The new $\ell_2^2$ and elastic net methods handle both fixed and varying learning rates, and both standard stochastic gradient descent (SGD) and forward backward splitting (FoBoS). Experimental results show that on a bag-of-words dataset with $260,941$ features, but only $88$ nonzero features on average per training example, the dynamic programming method trains a logistic regression classifier with elastic net regularization over $2000$ times faster than it otherwise would be.
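
The core trick is easy to sketch. The following Python sketch (illustrative only, not the authors' implementation) shows lazy elastic net regularization for logistic regression with a fixed learning rate in the FoBoS style: each weight records the step at which it was last brought current, and when its feature next appears, all missed proximal shrinkage steps are applied at once in closed form. The class name LazySparseElasticNet and the parameters eta, lam1, and lam2 are assumptions for illustration; the paper's dynamic programming extension to varying learning rates is omitted here.

import numpy as np

class LazySparseElasticNet:
    """Illustrative sketch of delayed (lazy) elastic net updates for sparse SGD."""

    def __init__(self, n_features, eta=0.1, lam1=1e-4, lam2=1e-4):
        self.w = np.zeros(n_features)          # weight vector
        self.last = np.zeros(n_features, int)  # step at which each weight was last current
        self.t = 0                             # number of completed SGD steps
        self.eta, self.lam1, self.lam2 = eta, lam1, lam2

    def _shrink(self, w, k):
        # Apply k elastic net proximal steps in one closed-form catch-up:
        #   |w| <- |w| * b^k - eta * lam1 * (1 - b^k) / (1 - b),  with b = 1 - eta * lam2,
        # clipped at zero (once a weight reaches zero it stays there).
        if k == 0 or w == 0.0:
            return w
        b = 1.0 - self.eta * self.lam2
        decay = b ** k
        total = self.eta * self.lam1 * (k if b == 1.0 else (1.0 - decay) / (1.0 - b))
        return np.sign(w) * max(0.0, abs(w) * decay - total)

    def step(self, idx, vals, y):
        # Bring only the features present in this example current.
        for j in idx:
            self.w[j] = self._shrink(self.w[j], self.t - self.last[j])
        # Logistic-loss gradient step restricted to the nonzero features.
        z = sum(self.w[j] * v for j, v in zip(idx, vals))
        g = 1.0 / (1.0 + np.exp(-z)) - y       # d(log loss)/dz for label y in {0, 1}
        self.t += 1
        for j, v in zip(idx, vals):
            self.w[j] = self._shrink(self.w[j] - self.eta * g * v, 1)
            self.last[j] = self.t

    def finalize(self):
        # Bring every weight current before the model is used for prediction.
        for j in range(len(self.w)):
            self.w[j] = self._shrink(self.w[j], self.t - self.last[j])
            self.last[j] = self.t

Clipping the closed form at zero is exact here: while a weight's magnitude is positive, the unclamped recursion decreases monotonically, so a nonnegative result means no intermediate step crossed zero, and a negative result means the weight hit zero at some point and would have stayed there.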
