ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation (2406.02540v2)

Published 4 Jun 2024 in cs.CV

Abstract: Diffusion transformers (DiTs) have exhibited remarkable performance in visual generation tasks, such as generating realistic images or videos based on textual instructions. However, larger model sizes and multi-frame processing for video generation lead to increased computational and memory costs, posing challenges for practical deployment on edge devices. Post-Training Quantization (PTQ) is an effective method for reducing memory costs and computational complexity. When quantizing diffusion transformers, we find that applying existing diffusion quantization methods designed for U-Net faces challenges in preserving quality. After analyzing the major challenges for quantizing diffusion transformers, we design an improved quantization scheme, "ViDiT-Q" (Video and Image Diffusion Transformer Quantization), to address these issues. Furthermore, we identify highly sensitive layers and timesteps that hinder quantization at lower bit-widths. To tackle this, we improve ViDiT-Q with a novel metric-decoupled mixed-precision quantization method (ViDiT-Q-MP). We validate the effectiveness of ViDiT-Q across a variety of text-to-image and video models. While baseline quantization methods fail at W8A8 and produce unreadable content at W4A8, ViDiT-Q achieves lossless W8A8 quantization. ViDiT-Q-MP achieves W4A8 with negligible visual quality degradation, resulting in a 2.5x memory optimization and a 1.5x latency speedup.
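To make the W8A8 setting concrete, the sketch below shows post-training fake quantization of a single linear layer in PyTorch, with per-output-channel weight scales and dynamic per-token activation scales. This is only an illustration under common PTQ assumptions, not the paper's ViDiT-Q implementation (which additionally uses a metric-decoupled mixed-precision assignment); names such as `W8A8Linear` and `quantize_symmetric` are hypothetical.

```python
# Minimal W8A8 post-training fake-quantization sketch (not the ViDiT-Q method).
import torch


def quantize_symmetric(x: torch.Tensor, n_bits: int, dim: int) -> torch.Tensor:
    """Symmetric uniform quantization along `dim`, returned as fake-quantized floats."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = x.abs().amax(dim=dim, keepdim=True).clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q * scale  # dequantize back to float to simulate quantization error


class W8A8Linear(torch.nn.Module):
    """Linear layer with per-output-channel int8 weights and per-token int8 activations."""

    def __init__(self, linear: torch.nn.Linear):
        super().__init__()
        # Weights are quantized once, offline (post-training quantization).
        self.weight = quantize_symmetric(linear.weight.data, n_bits=8, dim=1)
        self.bias = linear.bias

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Activations are quantized dynamically per token (last dim holds features).
        x_q = quantize_symmetric(x, n_bits=8, dim=-1)
        return torch.nn.functional.linear(x_q, self.weight, self.bias)


if __name__ == "__main__":
    torch.manual_seed(0)
    fp_layer = torch.nn.Linear(64, 64)
    q_layer = W8A8Linear(fp_layer)
    tokens = torch.randn(2, 16, 64)  # (batch, tokens, features)
    err = (fp_layer(tokens) - q_layer(tokens)).abs().mean()
    print(f"mean abs error vs. full-precision baseline: {err:.4f}")
```

Lowering the activation bit-width (e.g. W4A8's 4-bit weights) in such a uniform scheme is what typically degrades quality, which is the motivation for the paper's mixed-precision variant.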

Authors (12)
  1. Tianchen Zhao (27 papers)
  2. Tongcheng Fang (4 papers)
  3. Enshu Liu (9 papers)
  4. Widyadewi Soedarmadji (4 papers)
  5. Shiyao Li (17 papers)
  6. Zinan Lin (42 papers)
  7. Guohao Dai (51 papers)
  8. Shengen Yan (26 papers)
  9. Huazhong Yang (80 papers)
  10. Xuefei Ning (52 papers)
  11. Yu Wang (939 papers)
  12. Rui Wan (8 papers)
Citations (6)