
Efficient Vision-and-Language Pre-training with Text-Relevant Image Patch Selection (2403.07883v1)

Published 11 Jan 2024 in cs.CV and cs.AI

Abstract: Vision Transformers (ViTs) have become increasingly popular in large-scale Vision and Language Pre-training (VLP) models. Although previous VLP research has demonstrated the efficacy of ViTs, these efforts still struggle with computational inefficiencies caused by lengthy visual sequences. To address this challenge, we introduce an efficient VLP approach called TRIPS, which stands for Text-Relevant Image Patch Selection. TRIPS progressively reduces the visual sequence using a text-guided patch-selection layer in the visual backbone, thereby accelerating both training and inference processes. This patch-selection layer dynamically computes text-dependent visual attention, enabling it to identify attentive image tokens with text guidance and fuse inattentive ones in an end-to-end fashion. Importantly, TRIPS does not add any extra parameters and generalizes to most ViT-based VLP models. We incorporate TRIPS into three representative VLP models covering single-stream, dual-stream, and generative paradigms, and conduct extensive experiments on five widely-used multi-modal benchmark datasets. Our experimental results reveal that TRIPS delivers a 40% speedup, while maintaining competitive or superior performance on downstream tasks.
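The patch-selection mechanism described in the abstract — scoring image tokens by text-dependent attention, keeping the attentive ones, and fusing the inattentive ones — can be sketched roughly as follows. This is a minimal illustrative simplification, not the paper's implementation: the function name, the dot-product scoring, and the attention-weighted fusion are assumptions made for clarity.

```python
import numpy as np

def text_guided_patch_selection(patch_tokens, text_cls, keep_ratio=0.5):
    """Hypothetical sketch of a text-guided patch-selection step:
    score patches by similarity to the text representation, keep the
    top fraction, and fuse the remainder into a single token."""
    n, d = patch_tokens.shape
    # Text-dependent attention over image patches (scaled dot product).
    scores = patch_tokens @ text_cls / np.sqrt(d)
    attn = np.exp(scores - scores.max())
    attn = attn / attn.sum()
    # Keep the most attentive patches.
    k = max(1, int(n * keep_ratio))
    order = np.argsort(-attn)
    keep_idx, fuse_idx = order[:k], order[k:]
    kept = patch_tokens[keep_idx]
    # Fuse inattentive patches into one token, weighted by attention,
    # so no information is discarded outright.
    w = attn[fuse_idx]
    fused = (w[:, None] * patch_tokens[fuse_idx]).sum(axis=0) / w.sum()
    return np.vstack([kept, fused[None, :]])

rng = np.random.default_rng(0)
patches = rng.standard_normal((16, 8))   # 16 image patch embeddings
text = rng.standard_normal(8)            # text [CLS] embedding
reduced = text_guided_patch_selection(patches, text, keep_ratio=0.5)
print(reduced.shape)  # (9, 8): 8 kept patches + 1 fused token
```

Applied at several layers of the visual backbone, a reduction like this shortens the visual sequence progressively, which is the source of the reported speedup.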

Authors (9)
  1. Wei Ye (110 papers)
  2. Chaoya Jiang (15 papers)
  3. Haiyang Xu (67 papers)
  4. Chenhao Ye (1 paper)
  5. Chenliang Li (92 papers)
  6. Ming Yan (190 papers)
  7. Shikun Zhang (82 papers)
  8. Songhang Huang (1 paper)
  9. Fei Huang (408 papers)