Competitive Training of Mixtures of Independent Deep Generative Models (1804.11130v4)

Published 30 Apr 2018 in cs.LG, cs.AI, and stat.ML

Abstract: A common assumption in causal modeling posits that the data is generated by a set of independent mechanisms, and algorithms should aim to recover this structure. Standard unsupervised learning, however, is often concerned with training a single model to capture the overall distribution or aspects thereof. Inspired by clustering approaches, we consider mixtures of implicit generative models that "disentangle" the independent generative mechanisms underlying the data. Relying on an additional set of discriminators, we propose a competitive training procedure in which the models only need to capture the portion of the data distribution from which they can produce realistic samples. As a by-product, each model is simpler and faster to train. We empirically show that our approach splits the training distribution in a sensible way and increases the quality of the generated samples.
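
The abstract describes the competitive procedure only at a high level. The following is a minimal PyTorch sketch of that general idea, not the paper's exact algorithm: K independent generator/discriminator pairs where each real sample is routed to the pair whose discriminator rates it most realistic, so every generator only has to cover its own share of the data. The hard routing rule, network sizes, optimizer settings, and the toy two-dimensional data are all illustrative assumptions.

```python
# Illustrative sketch only: K generator/discriminator pairs trained competitively.
import torch
import torch.nn as nn

K, latent_dim, data_dim = 3, 8, 2

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

generators = [mlp(latent_dim, data_dim) for _ in range(K)]
discriminators = [mlp(data_dim, 1) for _ in range(K)]
g_opts = [torch.optim.Adam(g.parameters(), lr=1e-3) for g in generators]
d_opts = [torch.optim.Adam(d.parameters(), lr=1e-3) for d in discriminators]
bce = nn.BCEWithLogitsLoss()

def toy_batch(n=128):
    # Mixture of well-separated Gaussians standing in for "independent mechanisms".
    centers = torch.tensor([[-4.0, 0.0], [0.0, 4.0], [4.0, 0.0]])  # assumes K == 3
    idx = torch.randint(0, K, (n,))
    return centers[idx] + 0.3 * torch.randn(n, data_dim)

for step in range(2000):
    x = toy_batch()

    # Competitive routing (an assumed rule): every discriminator scores the real
    # batch, and each sample is assigned to the model that rates it most real.
    with torch.no_grad():
        scores = torch.stack([d(x).squeeze(-1) for d in discriminators], dim=1)
        assign = scores.argmax(dim=1)

    for k in range(K):
        xk = x[assign == k]
        if xk.numel() == 0:
            continue
        z = torch.randn(len(xk), latent_dim)
        fake = generators[k](z)

        # Discriminator k: its share of the real data vs. its own generator's fakes.
        d_loss = bce(discriminators[k](xk).squeeze(-1), torch.ones(len(xk))) \
               + bce(discriminators[k](fake.detach()).squeeze(-1), torch.zeros(len(xk)))
        d_opts[k].zero_grad(); d_loss.backward(); d_opts[k].step()

        # Generator k only needs to fool its own discriminator on its own share.
        g_loss = bce(discriminators[k](fake).squeeze(-1), torch.ones(len(xk)))
        g_opts[k].zero_grad(); g_loss.backward(); g_opts[k].step()
```

Because each pair only ever sees the slice of data it wins, the sketch reflects the abstract's claim that each model can stay smaller and faster to train than a single model covering the whole distribution.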

Authors (6)
  1. Francesco Locatello (92 papers)
  2. Damien Vincent (25 papers)
  3. Ilya Tolstikhin (21 papers)
  4. Gunnar Rätsch (59 papers)
  5. Sylvain Gelly (43 papers)
  6. Bernhard Schölkopf (412 papers)
Citations (3)
