Mixture-of-Experts Variational Autoencoder for Clustering and Generating from Similarity-Based Representations on Single Cell Data (1910.07763v3)

Published 17 Oct 2019 in cs.LG and stat.ML

Abstract: Clustering high-dimensional data, such as images or biological measurements, is a long-standing problem and has been studied extensively. Recently, Deep Clustering has gained popularity due to its flexibility in fitting the specific peculiarities of complex data. Here we introduce the Mixture-of-Experts Similarity Variational Autoencoder (MoE-Sim-VAE), a novel generative clustering model. The model can learn multi-modal distributions of high-dimensional data and use these to generate realistic data with high efficacy and efficiency. MoE-Sim-VAE is based on a Variational Autoencoder (VAE), where the decoder consists of a Mixture-of-Experts (MoE) architecture. This specific architecture allows for various modes of the data to be automatically learned by means of the experts. Additionally, we encourage the lower dimensional latent representation of our model to follow a Gaussian mixture distribution and to accurately represent the similarities between the data points. We assess the performance of our model on the MNIST benchmark data set and challenging real-world tasks of clustering mouse organs from single-cell RNA-sequencing measurements and defining cell subpopulations from mass cytometry (CyTOF) measurements on hundreds of different datasets. MoE-Sim-VAE exhibits superior clustering performance on all these tasks in comparison to the baselines as well as competitor methods.

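The sketch below illustrates the core architectural idea described in the abstract: a VAE whose decoder is a Mixture-of-Experts, with a gating function over the latent code deciding which expert reconstructs each sample. This is a minimal sketch assuming PyTorch, not the authors' implementation; the layer sizes, number of experts, and gating design are illustrative assumptions, and the similarity and Gaussian-mixture latent losses the paper describes are omitted because their exact form is not given in the abstract.

```python
# Minimal sketch (not the authors' code) of a VAE with a Mixture-of-Experts
# decoder. All hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoESimVAESketch(nn.Module):
    def __init__(self, input_dim=784, latent_dim=10, n_experts=10, hidden=256):
        super().__init__()
        # Encoder: maps data to the parameters of a Gaussian latent posterior.
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        # Gating network: soft assignment of each latent code to one expert.
        self.gate = nn.Linear(latent_dim, n_experts)
        # One decoder ("expert") per mode of the data.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, input_dim), nn.Sigmoid())
            for _ in range(n_experts)
        ])

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Expert responsibilities (interpretable as soft cluster assignments).
        gates = F.softmax(self.gate(z), dim=-1)                 # (B, K)
        recon = torch.stack([e(z) for e in self.experts], 1)    # (B, K, D)
        # Responsibility-weighted mixture of expert reconstructions.
        x_hat = (gates.unsqueeze(-1) * recon).sum(dim=1)        # (B, D)
        return x_hat, mu, logvar, gates

def vae_loss(x, x_hat, mu, logvar):
    # Standard VAE objective: reconstruction + KL to a unit Gaussian prior.
    # The paper additionally shapes the latent space with similarity and
    # Gaussian-mixture terms, which are not reproduced here.
    recon = F.binary_cross_entropy(x_hat, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

In this reading of the abstract, each expert specializes on one mode of the data, so the gating responsibilities double as cluster assignments; the softmax gate above is a simplified stand-in for whatever assignment mechanism the paper actually uses.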
Citations (11)
