Generative Adversarial Source Separation (1710.10779v1)
Published 30 Oct 2017 in cs.SD, cs.LG, cs.NE, and stat.ML
Abstract: Generative source separation methods such as non-negative matrix factorization (NMF) or auto-encoders rely on the assumption of an output probability density. Generative Adversarial Networks (GANs) can learn data distributions without needing a parametric assumption on the output density. We show in a speech source separation experiment that a multi-layer perceptron trained with a Wasserstein-GAN formulation outperforms NMF, auto-encoders trained with maximum likelihood, and variational auto-encoders in terms of source-to-distortion ratio.
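To make the described setup concrete, below is a minimal PyTorch sketch of a Wasserstein-GAN training loop for source separation, with an MLP generator mapping mixture spectrogram frames to source estimates and a critic scoring how source-like a frame is. The layer sizes, frame dimension, clipping value, and optimizer settings are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch (not the authors' code) of WGAN training for source
# separation on magnitude-spectrogram frames. All hyperparameters below
# are illustrative assumptions.
import torch
import torch.nn as nn

FRAME_DIM = 513  # assumed spectrogram frame size (e.g. 1024-point FFT)

# Generator: maps a mixture frame to an estimated source frame.
generator = nn.Sequential(
    nn.Linear(FRAME_DIM, 512), nn.ReLU(),
    nn.Linear(512, FRAME_DIM), nn.Softplus(),  # non-negative magnitudes
)

# Critic: unbounded score of how "source-like" a frame is (no sigmoid in WGAN).
critic = nn.Sequential(
    nn.Linear(FRAME_DIM, 512), nn.ReLU(),
    nn.Linear(512, 1),
)

g_opt = torch.optim.RMSprop(generator.parameters(), lr=5e-5)
c_opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
CLIP = 0.01  # weight clipping enforces the Lipschitz constraint (original WGAN)

def train_step(mixture, clean_source, n_critic=5):
    """One WGAN update: several critic steps, then one generator step."""
    for _ in range(n_critic):
        c_opt.zero_grad()
        fake = generator(mixture).detach()
        # Critic maximizes E[D(real)] - E[D(fake)]; we minimize the negative.
        loss_c = critic(fake).mean() - critic(clean_source).mean()
        loss_c.backward()
        c_opt.step()
        for p in critic.parameters():
            p.data.clamp_(-CLIP, CLIP)
    g_opt.zero_grad()
    # Generator maximizes E[D(fake)]; we minimize the negative.
    loss_g = -critic(generator(mixture)).mean()
    loss_g.backward()
    g_opt.step()
```

The key contrast with maximum-likelihood training of an auto-encoder is that no parametric output density is assumed: the critic's score, rather than a fixed likelihood, drives the generator toward the distribution of clean source frames.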