Latent Dirichlet Allocation in Generative Adversarial Networks (1812.06571v5)
Abstract: We study the problem of multimodal generative modelling of images based on generative adversarial networks (GANs). Despite their success, existing methods often ignore the underlying structure of vision data and the multimodal nature of image generation. To address this, we introduce a Dirichlet prior for multimodal image generation, which leads to a new Latent Dirichlet Allocation based GAN (LDAGAN). Specifically, in modelling the generative process, LDAGAN assigns each sample a generative mode that determines which generative sub-process it belongs to. For adversarial training, we derive a variational expectation-maximization (VEM) algorithm to estimate the model parameters. Experimental results on real-world datasets demonstrate that LDAGAN outperforms existing GANs.
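The abstract describes a generative process in which each sample is assigned a mode drawn under a Dirichlet prior and is then produced by the corresponding generative sub-process. Below is a minimal sketch of that sampling scheme, assuming a PyTorch setting; it is not the authors' implementation, it omits the discriminator and the VEM-based adversarial training, and all names (`SubGenerator`, `LDAGANGenerator`, `num_modes`, `alpha`, `noise_dim`) are illustrative assumptions.

```python
# Hedged sketch of the generative process described in the abstract:
# K sub-generators, a Dirichlet prior over mode proportions, and a
# per-sample categorical mode assignment. Illustrative only.

import torch
import torch.nn as nn
from torch.distributions import Dirichlet, Categorical


class SubGenerator(nn.Module):
    """One generative sub-process: maps a noise vector to a flattened image."""

    def __init__(self, noise_dim: int = 64, img_dim: int = 32 * 32 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


class LDAGANGenerator(nn.Module):
    """Mixture of K sub-generators with a Dirichlet prior over mode proportions."""

    def __init__(self, num_modes: int = 5, noise_dim: int = 64, alpha: float = 1.0):
        super().__init__()
        self.noise_dim = noise_dim
        # Symmetric Dirichlet concentration (assumed value); the prior governs
        # how mode proportions are distributed across samples.
        self.alpha = torch.full((num_modes,), alpha)
        self.sub_generators = nn.ModuleList(
            SubGenerator(noise_dim) for _ in range(num_modes)
        )

    def forward(self, batch_size: int):
        # 1) Draw mode proportions theta ~ Dirichlet(alpha).
        theta = Dirichlet(self.alpha).sample()
        # 2) Assign each sample a generative mode k ~ Categorical(theta).
        modes = Categorical(theta).sample((batch_size,))
        # 3) Each sample is produced by its assigned sub-generator from noise.
        noise = torch.randn(batch_size, self.noise_dim)
        images = torch.stack(
            [self.sub_generators[k](noise[i]) for i, k in enumerate(modes.tolist())]
        )
        return images, modes


if __name__ == "__main__":
    gen = LDAGANGenerator(num_modes=5)
    fake_images, mode_assignments = gen(batch_size=8)
    print(fake_images.shape, mode_assignments)
```

In the paper's full model the mode assignments and Dirichlet parameters are inferred jointly with adversarial training via the VEM algorithm, rather than fixed as in this sketch.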