
SQ-VAE: Variational Bayes on Discrete Representation with Self-annealed Stochastic Quantization (2205.07547v2)

Published 16 May 2022 in cs.LG and cs.CV

Abstract: A noted issue with the vector-quantized variational autoencoder (VQ-VAE) is that the learned discrete representation uses only a fraction of the codebook's full capacity, a phenomenon known as codebook collapse. We hypothesize that the training scheme of VQ-VAE, which involves some carefully designed heuristics, underlies this issue. In this paper, we propose a new training scheme that extends the standard VAE via novel stochastic dequantization and quantization, called the stochastically quantized variational autoencoder (SQ-VAE). In SQ-VAE, we observe that quantization is stochastic at the initial stage of training but gradually converges toward deterministic quantization, a trend we call self-annealing. Our experiments show that SQ-VAE improves codebook utilization without relying on common heuristics. Furthermore, we empirically show that SQ-VAE outperforms VAE and VQ-VAE on vision- and speech-related tasks.
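To make the self-annealing idea concrete, below is a minimal PyTorch sketch of stochastic quantization over a codebook, assuming a Gaussian quantization-noise model with a learnable variance. The class and parameter names (`StochasticQuantizer`, `log_var`) are illustrative, not the authors' reference implementation, and the paper's full variational objective with stochastic dequantization is omitted. The key mechanism shown: as the learned variance shrinks during training, the categorical code assignment sharpens toward a deterministic nearest-neighbor lookup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticQuantizer(nn.Module):
    """Illustrative sketch of stochastic quantization: codes are sampled
    from a categorical distribution over the codebook whose sharpness is
    controlled by a learnable variance. When training pressure drives the
    variance toward zero, sampling approaches deterministic
    nearest-neighbor quantization (the self-annealing behavior)."""

    def __init__(self, num_codes: int = 512, dim: int = 64):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, dim))
        # Log-variance of the quantization noise; learnable, so the model
        # can anneal assignments toward deterministic lookup on its own.
        self.log_var = nn.Parameter(torch.zeros(()))

    def forward(self, z_e: torch.Tensor) -> torch.Tensor:
        # z_e: (batch, dim) encoder outputs.
        # Squared distances to every codebook entry: (batch, num_codes).
        d2 = torch.cdist(z_e, self.codebook).pow(2)
        # Categorical logits over codes; smaller variance => sharper.
        logits = -0.5 * d2 / self.log_var.exp()
        if self.training:
            # Gumbel-softmax with hard=True samples a one-hot code while
            # keeping the operation differentiable (straight-through).
            one_hot = F.gumbel_softmax(logits, tau=1.0, hard=True)
        else:
            # At evaluation time, fall back to the most likely code.
            one_hot = F.one_hot(
                logits.argmax(dim=-1), self.codebook.size(0)
            ).float()
        # Map one-hot assignments back to codebook vectors: (batch, dim).
        return one_hot @ self.codebook
```

The Gumbel-softmax relaxation here is one common way to keep the sampling step differentiable; it stands in for whatever gradient estimator a full SQ-VAE implementation would use alongside the ELBO derived in the paper.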

Authors (10)
  1. Yuhta Takida (32 papers)
  2. Takashi Shibuya (32 papers)
  3. Chieh-Hsin Lai (32 papers)
  4. Junki Ohmura (3 papers)
  5. Toshimitsu Uesaka (17 papers)
  6. Naoki Murata (29 papers)
  7. Shusuke Takahashi (31 papers)
  8. Toshiyuki Kumakura (5 papers)
  9. Yuki Mitsufuji (127 papers)
  10. WeiHsiang Liao (4 papers)
Citations (49)