
BUS: Efficient and Effective Vision-language Pre-training with Bottom-Up Patch Summarization (2307.08504v2)

Published 17 Jul 2023 in cs.CV

Abstract: Vision Transformer (ViT) based Vision-Language Pre-training (VLP) models have demonstrated impressive performance in various tasks. However, the lengthy visual token sequences fed into ViT can lead to training inefficiency and ineffectiveness. Existing efforts address the challenge by either bottom-level patch extraction in the ViT backbone or top-level patch abstraction outside, not balancing training efficiency and effectiveness well. Inspired by text summarization in natural language processing, we propose a Bottom-Up Patch Summarization approach named BUS, coordinating bottom-level extraction and top-level abstraction to learn a concise summary of lengthy visual token sequences efficiently. Specifically, we incorporate a Text-Semantics-Aware Patch Selector (TSPS) into the ViT backbone to perform a coarse-grained visual token extraction and then attach a flexible Transformer-based Patch Abstraction Decoder (PAD) upon the backbone for top-level visual abstraction. This bottom-up collaboration enables our BUS to yield high training efficiency while maintaining or even improving effectiveness. We evaluate our approach on various visual-language understanding and generation tasks and show competitive downstream task performance while boosting the training efficiency by 50%. Additionally, our model achieves state-of-the-art performance on many downstream tasks by increasing input image resolution without increasing computational costs over baselines.
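To make the two-stage idea in the abstract concrete, the sketch below illustrates one plausible reading of it: a text-aware selector that keeps the top-k visual tokens most relevant to a pooled text feature (the coarse, bottom-level extraction), followed by a small Transformer decoder with learnable query tokens that compresses the kept patches into a short summary (the top-level abstraction). All module names, the scoring rule, and the hyperparameters here are illustrative assumptions, not the authors' released implementation.

```python
# Minimal PyTorch sketch of the bottom-up patch summarization idea.
# Assumption: selection is scored by similarity to a pooled text feature,
# and abstraction uses learnable query tokens in a Transformer decoder.
import torch
import torch.nn as nn


class TextAwarePatchSelector(nn.Module):
    """Coarse-grained selection: keep the top-k visual tokens whose
    (learned) similarity to the pooled text representation is highest."""

    def __init__(self, dim: int, keep_ratio: float = 0.5):
        super().__init__()
        self.keep_ratio = keep_ratio
        self.score = nn.Linear(dim, dim, bias=False)

    def forward(self, vis_tokens: torch.Tensor, text_feat: torch.Tensor):
        # vis_tokens: (B, N, D) patch tokens; text_feat: (B, D) pooled text
        scores = torch.einsum("bnd,bd->bn", self.score(vis_tokens), text_feat)
        k = max(1, int(vis_tokens.size(1) * self.keep_ratio))
        idx = scores.topk(k, dim=1).indices                       # (B, k)
        idx = idx.unsqueeze(-1).expand(-1, -1, vis_tokens.size(-1))
        return vis_tokens.gather(1, idx)                          # (B, k, D)


class PatchAbstractionDecoder(nn.Module):
    """Top-level abstraction: learnable queries cross-attend to the selected
    patches and produce a fixed-length visual summary."""

    def __init__(self, dim: int, num_summary: int = 16, depth: int = 2):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(1, num_summary, dim) * 0.02)
        layer = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=depth)

    def forward(self, selected: torch.Tensor):
        q = self.queries.expand(selected.size(0), -1, -1)
        return self.decoder(q, selected)                          # (B, num_summary, D)


if __name__ == "__main__":
    B, N, D = 2, 196, 768                  # e.g. 14x14 ViT patch grid
    vis = torch.randn(B, N, D)             # visual tokens from the ViT backbone
    txt = torch.randn(B, D)                # pooled text feature
    selector = TextAwarePatchSelector(D, keep_ratio=0.5)
    decoder = PatchAbstractionDecoder(D, num_summary=16)
    summary = decoder(selector(vis, txt))
    print(summary.shape)                   # torch.Size([2, 16, 768])
```

The efficiency argument in the abstract follows from this shape reduction: downstream cross-modal layers attend over a short summary (here 16 tokens) rather than the full patch sequence, which is also why higher input resolutions become affordable.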

Authors (10)
  1. Chaoya Jiang (15 papers)
  2. Haiyang Xu (67 papers)
  3. Wei Ye (110 papers)
  4. Qinghao Ye (31 papers)
  5. Chenliang Li (92 papers)
  6. Ming Yan (190 papers)
  7. Bin Bi (24 papers)
  8. Shikun Zhang (82 papers)
  9. Fei Huang (408 papers)
  10. Songfang Huang (51 papers)
Citations (7)