
PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models (2205.11169v2)

Published 23 May 2022 in cs.CV, cs.AI, and cs.CL

Abstract: Vision-language pre-training (VLP) has shown impressive performance on a wide range of cross-modal tasks, where VLP models without reliance on object detectors are becoming the mainstream due to their superior computation efficiency and competitive performance. However, the removal of object detectors also deprives the capability of VLP models in explicit object modeling, which is essential to various position-sensitive vision-language (VL) tasks, such as referring expression comprehension and visual commonsense reasoning. To address the challenge, we introduce PEVL that enhances the pre-training and prompt tuning of VLP models with explicit object position modeling. Specifically, PEVL reformulates discretized object positions and language in a unified language modeling framework, which facilitates explicit VL alignment during pre-training, and also enables flexible prompt tuning for various downstream tasks. We show that PEVL enables state-of-the-art performance of detector-free VLP models on position-sensitive tasks such as referring expression comprehension and phrase grounding, and also improves the performance on position-insensitive tasks with grounded inputs. We make the data and code for this paper publicly available at https://github.com/thunlp/PEVL.
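The core idea in the abstract, discretizing continuous object positions into tokens so that boxes and words share one language modeling objective, can be illustrated with a minimal sketch. This is not the authors' implementation: the token format (`<pos_k>`), the bin count, and the helper names below are assumptions for illustration only.

```python
# Minimal sketch (not the PEVL codebase) of discretizing bounding-box
# coordinates into position tokens that can be spliced into text, so a
# single masked-language-modeling objective covers words and positions.
# num_bins and the <pos_k> token format are illustrative assumptions.

def box_to_position_tokens(box, image_w, image_h, num_bins=512):
    """Map an (x1, y1, x2, y2) pixel box to four discrete position tokens."""
    x1, y1, x2, y2 = box
    # Normalize each coordinate to [0, 1], then quantize into num_bins bins.
    norm = [x1 / image_w, y1 / image_h, x2 / image_w, y2 / image_h]
    bins = [min(int(v * num_bins), num_bins - 1) for v in norm]
    return [f"<pos_{b}>" for b in bins]

def ground_phrase(phrase, box, image_w, image_h):
    """Splice position tokens after a phrase to form a grounded text input."""
    return f"{phrase} " + " ".join(box_to_position_tokens(box, image_w, image_h))

# Prints: a dog <pos_64> <pos_128> <pos_320> <pos_448>
print(ground_phrase("a dog", (80, 160, 400, 560), image_w=640, image_h=640))
```

Once positions are ordinary tokens, masking the `<pos_*>` tokens turns grounding into cloze-style prediction, which is what makes the same formulation reusable for prompt tuning on downstream tasks.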

Authors (7)
  1. Yuan Yao (292 papers)
  2. Qianyu Chen (18 papers)
  3. Ao Zhang (45 papers)
  4. Wei Ji (202 papers)
  5. Zhiyuan Liu (433 papers)
  6. Tat-Seng Chua (359 papers)
  7. Maosong Sun (337 papers)
Citations (36)