
Large-scale Online Feature Selection for Ultra-high Dimensional Sparse Data (1409.7794v3)

Published 27 Sep 2014 in cs.LG and cs.CV

Abstract: Feature selection with large-scale high-dimensional data is important yet very challenging in machine learning and data mining. Online feature selection is a promising new paradigm that is more efficient and scalable than batch feature selection methods, but existing online approaches usually fall short of batch approaches in efficacy. In this paper, we present a novel second-order online feature selection scheme that is simple yet effective, very fast, and extremely scalable for large-scale, ultra-high dimensional sparse data streams. The basic idea is to improve existing first-order online feature selection methods by exploiting second-order information to choose the subset of important features with high confidence weights. However, unlike many second-order learning methods that often suffer from extra high computational cost, we devise a novel algorithm for second-order online feature selection using a MaxHeap-based approach, which is not only more effective than existing first-order approaches, but also significantly more efficient and scalable for large-scale feature selection with ultra-high dimensional sparse data, as validated by our extensive experiments. Impressively, on a billion-scale synthetic dataset (1-billion dimensions, 1-billion nonzero features, and 1-million samples), our new algorithm took only 8 minutes on a single PC, which is orders of magnitude faster than traditional batch approaches. \url{http://arxiv.org/abs/1409.7794}
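To make the abstract's idea concrete, below is a minimal sketch of second-order online feature selection with a MaxHeap-based budget: the learner keeps AROW-style per-feature weights and diagonal variances, and retains only the B most confident features (smallest variance), using a max-heap so each truncation step costs O(log B). This is an illustration under stated assumptions, not the paper's exact algorithm; all class and parameter names (SOFSSketch, B, r) are hypothetical, and heap-key staleness as variances shrink is glossed over here.

```python
import heapq
import numpy as np

class SOFSSketch:
    """Hedged sketch of second-order online feature selection.

    Keeps at most B features, retaining those with the highest
    confidence (smallest diagonal variance sigma_i). A max-heap
    over the retained features' variances makes each truncation
    step O(log B). Names and update rules are illustrative.
    """

    def __init__(self, n_dims, B, r=1.0):
        self.B = B                     # feature budget
        self.r = r                     # AROW-style regularization
        self.w = np.zeros(n_dims)      # mean weight vector
        self.sigma = np.ones(n_dims)   # diagonal variance (uncertainty)
        self.heap = []                 # max-heap of (-sigma_i, i) over kept features
        self.kept = set()              # indices currently retained

    def partial_fit(self, x_idx, x_val, y):
        """One online update on a sparse example.

        x_idx : nonzero feature indices (int array)
        x_val : corresponding feature values (float array)
        y     : label in {-1, +1}
        """
        margin = y * np.dot(self.w[x_idx], x_val)
        if margin < 1.0:               # update only on margin violations
            conf = np.dot(self.sigma[x_idx] * x_val, x_val)
            beta = 1.0 / (conf + self.r)
            alpha = max(0.0, 1.0 - margin) * beta
            # Second-order (confidence-weighted) update of mean and variance
            self.w[x_idx] += alpha * y * self.sigma[x_idx] * x_val
            self.sigma[x_idx] -= beta * (self.sigma[x_idx] * x_val) ** 2
            self._truncate(x_idx)

    def _truncate(self, touched):
        """Keep only the B most confident (smallest-sigma) features."""
        for i in touched:
            if i in self.kept:
                continue               # simplification: stale heap keys are not refreshed
            if len(self.heap) < self.B:
                heapq.heappush(self.heap, (-self.sigma[i], i))
                self.kept.add(i)
            else:
                worst_sigma = -self.heap[0][0]   # largest variance among kept
                if self.sigma[i] < worst_sigma:
                    # Evict the least confident feature, admit the new one
                    _, j = heapq.heapreplace(self.heap, (-self.sigma[i], i))
                    self.kept.discard(j)
                    self.kept.add(i)
                    self.w[j] = 0.0    # zero out the evicted feature's weight
```

Because each update touches only the nonzero coordinates of a sparse example and the heap bounds truncation cost by O(log B), the per-example cost is independent of the full dimensionality, which is what makes billion-dimensional streams tractable on a single machine.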

Citations (12)
