Robust Training of Vector Quantized Bottleneck Models (2005.08520v1)

Published 18 May 2020 in cs.LG, cs.CL, and stat.ML

Abstract: In this paper we demonstrate methods for reliable and efficient training of discrete representations using Vector-Quantized Variational Auto-Encoder models (VQ-VAEs). Discrete latent variable models have been shown to learn nontrivial representations of speech, applicable to unsupervised voice conversion and reaching state-of-the-art performance on unit discovery tasks. For unsupervised representation learning, they have become viable alternatives to continuous latent variable models such as the Variational Auto-Encoder (VAE). However, training deep discrete variable models is challenging, due to the inherent non-differentiability of the discretization operation. In this paper we focus on VQ-VAE, a state-of-the-art discrete bottleneck model shown to perform on par with its continuous counterparts. It quantizes encoder outputs with on-line $k$-means clustering. We show that the codebook learning can suffer from poor initialization and non-stationarity of clustered encoder outputs. We demonstrate that these can be successfully overcome by increasing the learning rate for the codebook and periodic data-dependent codeword re-initialization. As a result, we achieve more robust training across different tasks, and significantly increase the usage of latent codewords even for large codebooks. This has practical benefits, for instance, in unsupervised representation learning, where large codebooks may lead to disentanglement of latent representations.
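The abstract describes a VQ bottleneck trained with straight-through gradients, plus two stabilizers: a higher codebook learning rate and periodic data-dependent re-initialization of unused codewords. Below is a minimal PyTorch sketch of that idea, not the authors' implementation; the class, hyperparameters (`num_codes`, `reinit_every`), and the usage-counting heuristic for detecting dead codewords are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Sketch of a VQ bottleneck: nearest-codeword lookup, straight-through
    gradients, and periodic data-dependent re-initialization of dead codes."""

    def __init__(self, num_codes=512, dim=64, reinit_every=100):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, dim))
        self.register_buffer("usage", torch.zeros(num_codes))
        self.reinit_every = reinit_every
        self.step = 0

    def forward(self, z_e):
        # z_e: (batch, dim) encoder outputs.
        dists = torch.cdist(z_e, self.codebook)    # (batch, num_codes) L2 distances
        idx = dists.argmin(dim=1)                  # nearest codeword per input
        z_q = self.codebook[idx]

        # Track codeword usage to detect dead codes (an illustrative heuristic).
        self.usage.index_add_(0, idx, torch.ones_like(idx, dtype=self.usage.dtype))
        self.step += 1
        if self.training and self.step % self.reinit_every == 0:
            self._reinit_dead_codes(z_e)

        # Straight-through estimator: forward pass uses z_q,
        # gradients flow back to the encoder as if z_q were z_e.
        z_q_st = z_e + (z_q - z_e).detach()
        commit_loss = F.mse_loss(z_e, z_q.detach())    # pull encoder toward codes
        codebook_loss = F.mse_loss(z_q, z_e.detach())  # on-line k-means style update
        return z_q_st, commit_loss + codebook_loss

    @torch.no_grad()
    def _reinit_dead_codes(self, z_e):
        # Data-dependent re-initialization: move unused codewords onto
        # randomly chosen encoder outputs from the current batch.
        dead = (self.usage == 0).nonzero(as_tuple=True)[0]
        if len(dead) > 0:
            samples = z_e[torch.randint(0, z_e.size(0), (len(dead),))]
            self.codebook.data[dead] = samples
        self.usage.zero_()
```

The paper's other remedy, a larger codebook learning rate, could be expressed with a separate optimizer parameter group, e.g. `torch.optim.Adam([{"params": encoder.parameters()}, {"params": vq.codebook, "lr": 10 * base_lr}])`; the 10x factor here is a placeholder, not a value from the paper.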

Authors (9)
  1. Jan Chorowski (29 papers)
  2. Guillaume Sanchez (3 papers)
  3. Ricard Marxer (21 papers)
  4. Nanxin Chen (30 papers)
  5. Hans J. G. A. Dolfing (3 papers)
  6. Sameer Khurana (26 papers)
  7. Tanel Alumäe (14 papers)
  8. Antoine Laurent (22 papers)
  9. Adrian Łańcucki (12 papers)
Citations (51)