ToDo: Token Downsampling for Efficient Generation of High-Resolution Images (2402.13573v3)
Published 21 Feb 2024 in cs.CV, cs.AI, and cs.LG
Abstract: The attention mechanism has been crucial for image diffusion models; however, its quadratic computational complexity limits the image sizes that can be processed within reasonable time and memory constraints. This paper investigates the importance of dense attention in generative image models, whose features often contain redundancy, making them well suited to sparser attention mechanisms. We propose ToDo, a novel training-free method that downsamples key and value tokens to accelerate Stable Diffusion inference by up to 2x for common image sizes and by 4.5x or more for high resolutions such as 2048x2048. We demonstrate that our approach outperforms previous methods in balancing efficient throughput and fidelity.
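The core idea is that only the keys and values are downsampled while the queries keep full resolution, so attention produces one output token per query and no upsampling step is needed; with a 2D spatial downsampling factor of f, the attention matrix shrinks by f^2. Below is a minimal PyTorch sketch of this pattern, assuming the tokens form a square spatial grid; the helper names (`downsample_tokens`, `todo_attention`, `factor`) are illustrative and not taken from the paper's code, which may differ in detail.

```python
# Sketch of key/value token downsampling in attention (hypothetical helper
# names; the paper's released implementation may differ).
import math
import torch
import torch.nn.functional as F

def downsample_tokens(x: torch.Tensor, factor: int) -> torch.Tensor:
    """Spatially downsample a (batch, tokens, channels) tensor whose
    tokens form a square grid, using nearest-neighbor interpolation."""
    b, n, c = x.shape
    h = w = math.isqrt(n)
    assert h * w == n, "tokens must form a square spatial grid"
    grid = x.transpose(1, 2).reshape(b, c, h, w)       # -> (b, c, h, w)
    grid = F.interpolate(grid, scale_factor=1 / factor, mode="nearest")
    return grid.flatten(2).transpose(1, 2)             # -> (b, n', c)

def todo_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                   factor: int = 2) -> torch.Tensor:
    """Attention with downsampled K and V; Q stays at full resolution,
    so the output keeps one token per query."""
    k = downsample_tokens(k, factor)
    v = downsample_tokens(v, factor)
    scores = (q @ k.transpose(-2, -1)) / math.sqrt(q.shape[-1])
    return scores.softmax(dim=-1) @ v

# Usage example: 64x64 latent grid (4096 tokens), 320 channels.
q = torch.randn(1, 4096, 320)
k = torch.randn(1, 4096, 320)
v = torch.randn(1, 4096, 320)
out = todo_attention(q, k, v, factor=2)  # attention matrix is 4x smaller
print(out.shape)  # torch.Size([1, 4096, 320])
```

Because the method only changes how K and V are prepared before a standard attention call, it requires no retraining and can be dropped into an existing diffusion pipeline, which is what makes the training-free claim plausible.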
Authors: Ethan Smith, Nayan Saxena, Aninda Saha