
Uniform Masking Prevails in Vision-Language Pretraining (2212.05195v1)

Published 10 Dec 2022 in cs.LG

Abstract: Masked Language Modeling (MLM) has proven to be an essential component of Vision-Language (VL) pretraining. To implement MLM, the researcher must make two design choices: the masking strategy, which determines which tokens to mask, and the masking rate, which determines how many tokens to mask. Previous work has focused primarily on the masking strategy while setting the masking rate at a default of 15%. In this paper, we show that increasing this masking rate improves downstream performance while simultaneously reducing the performance gap among different masking strategies, rendering the uniform masking strategy competitive with other more complex ones. Surprisingly, we also discover that increasing the masking rate leads to gains in Image-Text Matching (ITM) tasks, suggesting that the role of MLM goes beyond language modeling in VL pretraining.
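As a rough illustration of the two design choices named in the abstract (which tokens to mask, and how many), the sketch below implements uniform masking at a configurable rate in plain Python. The function name, the 40% example rate, and the use of -100 as an ignore label are illustrative assumptions, not the authors' implementation.

```python
import random

def uniform_mask(token_ids, mask_token_id, masking_rate=0.40, special_ids=()):
    """Uniform masking for MLM: every non-special position is equally
    likely to be masked, and `masking_rate` controls how many are masked.

    Note: the 0.40 default and the -100 ignore label are assumptions for
    illustration; the paper's point is only that rates above the usual 15%
    tend to help and make uniform masking competitive.
    """
    # Positions eligible for masking (skip special tokens such as [CLS]/[SEP]).
    candidates = [i for i, t in enumerate(token_ids) if t not in special_ids]
    num_to_mask = max(1, int(round(masking_rate * len(candidates))))

    # Uniform strategy: sample positions uniformly at random, no heuristics.
    masked_positions = set(random.sample(candidates, num_to_mask))

    inputs = list(token_ids)
    labels = [-100] * len(token_ids)  # -100: position ignored by the MLM loss
    for i in masked_positions:
        labels[i] = token_ids[i]      # target is the original token
        inputs[i] = mask_token_id     # input is replaced with the mask token
    return inputs, labels
```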

Authors (7)
  1. Siddharth Verma (7 papers)
  2. Yuchen Lu (17 papers)
  3. Rui Hou (56 papers)
  4. Hanchao Yu (17 papers)
  5. Nicolas Ballas (49 papers)
  6. Madian Khabsa (38 papers)
  7. Amjad Almahairi (19 papers)