
Epsilon-VAE: Denoising as Visual Decoding (2410.04081v4)

Published 5 Oct 2024 in cs.CV, cs.AI, and eess.IV

Abstract: In generative modeling, tokenization simplifies complex data into compact, structured representations, creating a more efficient, learnable space. For high-dimensional visual data, it reduces redundancy and emphasizes key features for high-quality generation. Current visual tokenization methods rely on a traditional autoencoder framework, where the encoder compresses data into latent representations, and the decoder reconstructs the original input. In this work, we offer a new perspective by proposing denoising as decoding, shifting from single-step reconstruction to iterative refinement. Specifically, we replace the decoder with a diffusion process that iteratively refines noise to recover the original image, guided by the latents provided by the encoder. We evaluate our approach by assessing both reconstruction (rFID) and generation quality (FID), comparing it to state-of-the-art autoencoding approaches. By adopting iterative reconstruction through diffusion, our autoencoder, namely Epsilon-VAE, achieves high reconstruction quality, which in turn enhances downstream generation quality by 22% at the same compression rates or provides 2.3x inference speedup through increasing compression rates. We hope this work offers new insights into integrating iterative generation and autoencoding for improved compression and generation.

Summary

  • The paper proposes a novel visual tokenization method that replaces single-step decoding with iterative denoising for enhanced compression and quality.
  • It leverages diffusion models and conditional denoising with a U-Net backbone to iteratively refine latent representations during decoding.
  • Empirical results on ImageNet demonstrate improved rFID scores and robust resolution generalization, outperforming traditional VAE baselines.

Review of "Epsilon-VAE: Denoising as Visual Decoding"

The paper presents a novel approach to visual tokenization in generative modeling, proposing a method called Epsilon-VAE that fundamentally alters the conventional autoencoding paradigm by replacing deterministic decoding with an iterative denoising process. This approach integrates diffusion models into autoencoding, aiming to enhance both the compression and the generation quality of high-dimensional visual data.

Methodological Innovations

The authors propose shifting from single-step reconstruction to an iterative refinement process. The key innovation lies in employing a diffusion process as the decoder, which iteratively refines noise to recover the original image, thus providing a fresh perspective on decoding in autoencoders. This reframing seeks to improve both latent generation efficiency and decoding quality.
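To make the contrast concrete, the sketch below compares a conventional one-shot decoder with a denoising decoder that iteratively integrates a learned velocity field from noise toward an image. This is a minimal illustration, not the paper's implementation: `denoiser`, the step count, and the output shape are assumptions made for the example.

```python
import torch

def decode_autoencoder(decoder, z):
    # Conventional decoding: a single deterministic forward pass.
    return decoder(z)

def decode_by_denoising(denoiser, z, steps=8, image_shape=(3, 256, 256)):
    # Denoising-as-decoding: start from pure noise and iteratively refine
    # it toward an image, conditioning every step on the encoder latents z.
    # `denoiser` is a hypothetical network that predicts a flow velocity.
    x = torch.randn(z.shape[0], *image_shape)
    ts = torch.linspace(1.0, 0.0, steps + 1)  # integrate from noise (t=1) to data (t=0)
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        v = denoiser(x, t_cur.expand(z.shape[0]), z)  # velocity prediction at time t
        x = x + (t_next - t_cur) * v                  # Euler step (dt < 0)
    return x
```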

Key components of the methodology include:

  • Conditional Denoising: Decoding is reformulated as sampling from a distribution conditioned on the encoder's latents, which are in turn shaped to be conducive to downstream diffusion models.
  • Architecture and Conditioning: A standard U-Net architecture is retained, with enhancements like channel-wise concatenation for conditioning inputs.
  • Objectives: A rectified flow parameterization is introduced, optimizing the diffusion trajectory for more efficient sampling. Perceptual matching and adversarial trajectory matching augment the score-matching loss to maintain high reconstruction fidelity.
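As a concrete reading of the rectified-flow objective in the last bullet, the sketch below regresses the straight-line velocity between a clean image and noise. The perceptual and adversarial matching terms described above are omitted, and `denoiser` (assumed to condition via channel-wise concatenation of the latents) is a hypothetical stand-in.

```python
import torch
import torch.nn.functional as F

def rectified_flow_loss(denoiser, x0, z):
    # x0: clean images [B, C, H, W]; z: encoder latents, assumed to be
    # upsampled and channel-concatenated with the noisy input inside
    # `denoiser`, e.g. torch.cat([x_t, upsample(z)], dim=1).
    b = x0.shape[0]
    t = torch.rand(b, device=x0.device)   # random time in [0, 1]
    noise = torch.randn_like(x0)          # the pure-noise endpoint (t = 1)
    tb = t.view(b, 1, 1, 1)
    x_t = (1.0 - tb) * x0 + tb * noise    # straight-line interpolation path
    v_target = noise - x0                 # constant velocity along that line
    v_pred = denoiser(x_t, t, z)          # prediction conditioned on z
    return F.mse_loss(v_pred, v_target)
```

Because the regression target is the constant velocity of a straight path, few-step Euler integration (as in the decoding sketch above) remains viable at sampling time.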

Numerical Results and Comparisons

The paper reports empirical evaluations on image reconstruction and generation using the ImageNet dataset. For reconstruction, Epsilon-VAE shows a notable improvement in rFID scores over traditional VAEs, maintaining its advantage as compression rates increase. It performs especially well at higher compression rates, where the stochasticity of the denoising decoder helps it adapt to the compression level; this robustness is further illustrated by its effective resolution generalization.

For generation tasks, Epsilon-VAE outperforms the VAE baseline across all model scales, affirming that the reconstruction gains translate into generation quality. Notably, even the base Epsilon-VAE model surpasses the largest VAE variant, indicating substantial improvements from the proposed methodology.

Implications and Future Prospects

The introduction of diffusion models into autoencoding suggests a promising direction for visual tokenization, potentially influencing both theoretical and practical aspects of AI. From a theoretical perspective, this approach aligns with the rate-distortion-perception framework, enriching the discourse on the balance between compression, fidelity, and distribution alignment. Practically, it paves the way for more efficient latent representations, benefitting tasks requiring high-quality image synthesis.
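One common formalization of that framework is the rate-distortion-perception function of Blau and Michaeli (2019), stated below for reference; here Δ is a generic distortion measure and d a divergence between distributions, neither specific to this paper.

```latex
R(D, P) = \min_{p_{\hat{X} \mid X}} \; I(X; \hat{X})
\quad \text{s.t.} \quad
\mathbb{E}\big[\Delta(X, \hat{X})\big] \le D,
\qquad
d\big(p_X, p_{\hat{X}}\big) \le P
```

In this view, Epsilon-VAE's stochastic decoder targets the perception constraint (distribution alignment) in addition to the distortion constraint, rather than distortion alone.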

Although the proposed method demands more computational resources due to the iterative nature of diffusion, its scalability in terms of resolution generalization suggests future enhancements could target runtime efficiency, perhaps through techniques like patch-based diffusion.

Conclusion

Epsilon-VAE represents a compelling contribution, reimagining visual decoding as iterative denoising and integrating diffusion processes seamlessly into autoencoders. This advancement opens an avenue for further investigation of iterative generation during decoding, potentially inspiring new methodologies for visual autoencoding and compression in generative modeling.
