SmartTrim: Adaptive Tokens and Attention Pruning for Efficient Vision-Language Models (2305.15033v2)

Published 24 May 2023 in cs.CL

Abstract: Despite achieving remarkable performance on various vision-language tasks, Transformer-based Vision-Language Models (VLMs) suffer from redundancy in inputs and parameters, significantly hampering their efficiency in real-world applications. Moreover, the degree of redundancy in token representations and model parameters, such as attention heads, varies significantly across inputs. In light of these challenges, we propose SmartTrim, an adaptive acceleration framework for VLMs that adjusts the computational overhead per instance. Specifically, we integrate lightweight modules into the original backbone to identify and prune redundant token representations and attention heads within each layer. Furthermore, we devise a self-distillation strategy to enhance the consistency between the predictions of the pruned model and its full-capacity counterpart. Experimental results across various vision-language tasks consistently demonstrate that SmartTrim accelerates the original model by 2-3 times with minimal performance degradation, highlighting its effectiveness and efficiency compared to previous approaches. Code will be available at https://github.com/kugwzk/SmartTrim.
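
To make the idea of per-instance pruning via lightweight modules more concrete, below is a minimal, hypothetical PyTorch sketch of a token scorer and an attention-head gate attached to a Transformer backbone. The module names, the top-k keep ratio, and the sigmoid gating scheme are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of per-instance token and attention-head pruning,
# in the spirit of SmartTrim. Names, thresholds, and gating choices are
# assumptions for illustration only.
import torch
import torch.nn as nn


class TokenPruner(nn.Module):
    """Lightweight scorer that keeps the top-k most informative tokens per instance."""

    def __init__(self, hidden_dim: int, keep_ratio: float = 0.7):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)  # tiny module added to the backbone
        self.keep_ratio = keep_ratio

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim)
        scores = self.scorer(hidden_states).squeeze(-1)           # (batch, seq_len)
        k = max(1, int(hidden_states.size(1) * self.keep_ratio))
        keep_idx = scores.topk(k, dim=1).indices.sort(dim=1).values
        # Gather the retained token representations, preserving original order.
        batch_idx = torch.arange(hidden_states.size(0)).unsqueeze(-1)
        return hidden_states[batch_idx, keep_idx]


class HeadGate(nn.Module):
    """Per-instance gate over attention heads; low-gate heads could be skipped."""

    def __init__(self, hidden_dim: int, num_heads: int):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, num_heads)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Pool over tokens, then produce a soft per-head mask in [0, 1].
        pooled = hidden_states.mean(dim=1)                        # (batch, hidden_dim)
        return torch.sigmoid(self.gate(pooled))                   # (batch, num_heads)


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)                                    # (batch, tokens, dim)
    pruner = TokenPruner(hidden_dim=64, keep_ratio=0.5)
    gate = HeadGate(hidden_dim=64, num_heads=8)
    print(pruner(x).shape)  # torch.Size([2, 8, 64]): half the tokens retained
    print(gate(x).shape)    # torch.Size([2, 8]): one gate value per head
```

In such a setup, one module of each kind would be inserted per layer, so the retained token count and active heads can differ both across layers and across inputs; the self-distillation objective described in the abstract would then encourage the pruned forward pass to match the full-capacity model's predictions.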

Authors (10)
  1. Zekun Wang (50 papers)
  2. Jingchang Chen (10 papers)
  3. Wangchunshu Zhou (73 papers)
  4. Ming Liu (421 papers)
  5. Bing Qin (186 papers)
  6. Haichao Zhu (9 papers)
  7. Jiafeng Liang (8 papers)
  8. Liping Shan (3 papers)
  9. Dongliang Xu (19 papers)
  10. Qing Yang (138 papers)
Citations (2)
