Large Scale Clustering with Variational EM for Gaussian Mixture Models (1810.00803v4)

Published 1 Oct 2018 in stat.ML and cs.LG

Abstract: This paper is a preliminary (pre-review) version of a sublinear variational algorithm for isotropic Gaussian mixture models (GMMs). Further developments of the algorithm for GMMs with diagonal covariance matrices (instead of isotropic clusters), together with the corresponding benchmarking results, have been published in TPAMI (doi:10.1109/TPAMI.2021.3133763) as "A Variational EM Acceleration for Efficient Clustering at Very Large Scales". We kindly refer the reader to the TPAMI paper instead of this much earlier arXiv version (the TPAMI paper is also open access). Publicly available source code accompanies the paper (see https://github.com/variational-sublinear-clustering). Please note that the TPAMI paper no longer contains the benchmark on the 80 Million Tiny Images dataset, because we followed the dataset creators' call to discontinue its use. The aim of the project (which resulted in this arXiv version and the later TPAMI paper) is to explore the current efficiency and large-scale limits of fitting a parametric clustering model to data distributions. To reduce computational complexity, we combined a clustering objective based on truncated variational EM (which reduces complexity in the number of clusters) with coreset objectives (which reduce complexity in the number of data points). We used efficient coreset construction and efficient seeding to translate the theoretical sublinear complexity gains into a practical algorithm. In applications to standard large-scale clustering benchmarks, we observed substantial wall-clock speedups over already highly efficient clustering approaches. To demonstrate that the observed efficiency enables applications previously considered infeasible, we clustered the entire, unscaled 80 Million Tiny Images dataset into up to 32,000 clusters.
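The truncation idea at the heart of the abstract can be illustrated with a minimal numpy sketch: each data point's variational posterior is restricted to its K' most promising clusters, so the E-step costs scale with K' rather than the full number of clusters K. This is only an illustration under assumptions, not the paper's actual algorithm: here the K' candidates are picked naively via a full distance computation (which is itself linear in K), whereas the paper's sublinear scheme avoids exactly that; all function names and the toy data below are hypothetical.

```python
import numpy as np

def truncated_e_step(X, mu, sigma2, pi, Kprime):
    """Truncated variational E-step for an isotropic GMM.

    Responsibilities are computed only over each point's K' nearest
    clusters (a sketch; the paper selects candidates more cheaply).
    Returns sparse responsibilities as (indices, weights).
    """
    N, D = X.shape
    # Squared distances from every point to every cluster mean.
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # (N, K)
    # Indices of the K' nearest clusters per point (unordered).
    idx = np.argpartition(d2, Kprime - 1, axis=1)[:, :Kprime]  # (N, K')
    rows = np.arange(N)[:, None]
    # Log joint restricted to the retained clusters.
    log_p = (np.log(np.maximum(pi[idx], 1e-12))
             - 0.5 * D * np.log(2 * np.pi * sigma2)
             - 0.5 * d2[rows, idx] / sigma2)
    # Softmax over the truncated support only.
    log_p -= log_p.max(axis=1, keepdims=True)
    r = np.exp(log_p)
    r /= r.sum(axis=1, keepdims=True)
    return idx, r

def m_step(X, idx, r, K):
    """M-step using the sparse (truncated) responsibilities."""
    N, D = X.shape
    Nk = np.zeros(K)
    mu = np.zeros((K, D))
    np.add.at(Nk, idx, r)  # soft counts per cluster
    np.add.at(mu, idx.ravel(),
              (r[..., None] * X[:, None, :]).reshape(-1, D))
    mu /= np.maximum(Nk, 1e-12)[:, None]
    # Shared isotropic variance over the truncated support.
    d2 = ((X[:, None, :] - mu[idx]) ** 2).sum(axis=2)
    sigma2 = (r * d2).sum() / (N * D)
    pi = Nk / N
    return mu, sigma2, pi

# Toy usage: three well-separated 2-D blobs, truncation K' = 2 of K = 3.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(c, 0.3, (200, 2)) for c in (0.0, 3.0, 6.0)])
mu = X[rng.choice(len(X), 3, replace=False)]
sigma2, pi = 1.0, np.full(3, 1.0 / 3.0)
for _ in range(20):
    idx, r = truncated_e_step(X, mu, sigma2, pi, Kprime=2)
    mu, sigma2, pi = m_step(X, idx, r, K=3)
```

The sketch also omits the abstract's second complexity reduction: in the paper, the E/M-steps run on a weighted coreset rather than all N points, which would enter above as per-point weights multiplying the responsibilities.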

Citations (13)
