
COMCAT: Towards Efficient Compression and Customization of Attention-Based Vision Models (2305.17235v2)

Published 26 May 2023 in cs.CV, cs.AI, and cs.LG

Abstract: Attention-based vision models, such as Vision Transformer (ViT) and its variants, have shown promising performance in various computer vision tasks. However, these emerging architectures suffer from large model sizes and high computational costs, calling for efficient model compression solutions. To date, pruning ViTs has been well studied, while other compression strategies that have been widely applied in CNN compression, e.g., model factorization, remain little explored in the context of ViT compression. This paper explores an efficient method for compressing vision transformers to enrich the toolset for obtaining compact attention-based vision models. Based on a new insight into the multi-head attention layer, we develop a highly efficient ViT compression solution, which outperforms the state-of-the-art pruning methods. For compressing DeiT-small and DeiT-base models on ImageNet, our proposed approach achieves 0.45% and 0.76% higher top-1 accuracy even with fewer parameters. Our findings can also be applied to improve the customization efficiency of text-to-image diffusion models, with much faster training (up to $2.6\times$ speedup) and lower extra storage cost (up to $1927.5\times$ reduction) than existing works.
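The abstract only names the technique at a high level; the general idea behind model factorization can be illustrated with a minimal sketch. The snippet below shows generic truncated-SVD factorization of a linear projection, the kind of layer found in multi-head attention. It is an illustrative assumption, not COMCAT's actual per-head low-rank algorithm; the function name, rank value, and layer sizes are hypothetical.

```python
# Minimal sketch of low-rank weight factorization for compression,
# assuming a truncated-SVD scheme (not COMCAT's specific method).
import torch
import torch.nn as nn

def factorize_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Replace an (out x in) linear layer with two smaller linears
    (in -> rank -> out) built from the truncated SVD of its weight."""
    W = layer.weight.data                    # shape: (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]             # (out, rank), singular values absorbed
    V_r = Vh[:rank, :]                       # (rank, in), so W ~= U_r @ V_r

    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data.copy_(V_r)
    second.weight.data.copy_(U_r)
    if layer.bias is not None:
        second.bias.data.copy_(layer.bias.data)
    return nn.Sequential(first, second)

# Example: factorize one attention projection at the DeiT-small width (384).
proj = nn.Linear(384, 384)
compressed = factorize_linear(proj, rank=64)

orig_params = sum(p.numel() for p in proj.parameters())
comp_params = sum(p.numel() for p in compressed.parameters())
print(orig_params, comp_params)  # 147840 vs 49536: roughly 3x fewer parameters
```

The same low-rank idea plausibly underlies the storage figure quoted for diffusion-model customization: if only the small factor matrices need to be stored per customized concept, rather than a full set of fine-tuned weights, the extra storage shrinks by orders of magnitude.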

Authors (6)
  1. Jinqi Xiao (8 papers)
  2. Miao Yin (25 papers)
  3. Yu Gong (46 papers)
  4. Xiao Zang (6 papers)
  5. Jian Ren (97 papers)
  6. Bo Yuan (151 papers)
Citations (7)