
Should You Mask 15% in Masked Language Modeling? (2202.08005v3)

Published 16 Feb 2022 in cs.CL and cs.LG

Abstract: Masked language models (MLMs) conventionally mask 15% of tokens due to the belief that more masking would leave insufficient context to learn good representations; this masking rate has been widely used, regardless of model sizes or masking strategies. In this work, we revisit this important choice of MLM pre-training. We first establish that 15% is not universally optimal, and larger models should adopt a higher masking rate. Specifically, we find that masking 40% outperforms 15% for BERT-large size models on GLUE and SQuAD. Interestingly, an extremely high masking rate of 80% can still preserve 95% fine-tuning performance and most of the accuracy in linguistic probing, challenging the conventional wisdom about the role of the masking rate. We then examine the interplay between masking rates and masking strategies and find that uniform masking requires a higher masking rate compared to sophisticated masking strategies such as span or PMI masking. Finally, we argue that increasing the masking rate has two distinct effects: it leads to more corruption, which makes the prediction task more difficult; it also enables more predictions, which benefits optimization. Using this framework, we revisit BERT's 80-10-10 corruption strategy. Together, our results contribute to a better understanding of MLM pre-training.

Analyzing Masking Rates in Masked Language Modeling

The paper "Should You Mask 15% in Masked LLMing?" by Wettig et al. challenges conventional assumptions about the optimal choice of masking rates in masked LLMs (MLMs). The traditional practice of masking 15% of tokens has been pervasive across various sizes and strategies of MLMs, largely based on the belief that more masking would hinder the ability to learn effective representations, and less masking would reduce training efficiency. This paper revisits the established norms and provides evidence that larger models might benefit from higher masking rates, contrary to the prevalent strategy.

The authors conducted an array of experiments with BERT-large-sized models, fine-tuning on popular benchmarks such as GLUE and SQuAD, and found that a masking rate of 40% outperformed the 15% baseline. Remarkably, even an extremely high masking rate of 80% retained about 95% of downstream fine-tuning performance, suggesting that the optimal masking rate scales with model capacity rather than being a universal constant. This observation undercuts the traditional justification for keeping masking rates low and implies that larger, more capable architectures can learn effectively even when most of the context is removed.
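
As a rough illustration of how small this change is in practice, the sketch below raises the masking probability in a stock Hugging Face `transformers` MLM data collator from the conventional 15% to the 40% the paper recommends for BERT-large-sized models. This is a minimal sketch using off-the-shelf library components, not the authors' training code.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")

# Conventional setup: mask 15% of tokens.
collator_15 = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

# Setting the paper finds stronger for BERT-large-sized models: mask 40%.
collator_40 = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.40
)

# Labels are set to -100 at positions that do not contribute to the MLM loss.
batch = collator_40([tokenizer("The quick brown fox jumps over the lazy dog.")])
print(batch["input_ids"])
print(batch["labels"])
```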

Delving deeper into masking strategies, the paper establishes that different strategies demand different optimal rates. Uniform masking, the simplest and most common strategy, requires a higher masking rate than more sophisticated strategies such as span or PMI masking. At higher rates, uniformly sampled masks are more likely to cover whole spans and n-grams by chance, inadvertently approximating the effect of these more sophisticated strategies without their added complexity.
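
To make the contrast concrete, the sketch below implements uniform masking alongside a simplified SpanBERT-style span masking in PyTorch. It is an illustrative toy implementation with assumed defaults (e.g., a mean span length of 3), not the masking code used in the paper.

```python
import torch

def uniform_mask(num_tokens: int, mask_rate: float) -> torch.Tensor:
    """Mask each position independently with probability `mask_rate`."""
    return torch.rand(num_tokens) < mask_rate

def span_mask(num_tokens: int, mask_rate: float, mean_span: float = 3.0) -> torch.Tensor:
    """Mask contiguous spans with geometric lengths until roughly `mask_rate` is reached.

    Simplified SpanBERT-style masking; PMI masking would instead draw spans
    from a precomputed list of high-PMI n-grams.
    """
    mask = torch.zeros(num_tokens, dtype=torch.bool)
    budget = int(num_tokens * mask_rate)
    while int(mask.sum()) < budget:
        length = int(torch.distributions.Geometric(1.0 / mean_span).sample()) + 1
        length = min(length, budget - int(mask.sum()))
        start = int(torch.randint(0, max(num_tokens - length, 1), (1,)))
        mask[start:start + length] = True
    return mask

# At high masking rates, independently sampled masks start to form contiguous
# runs by chance, loosely mimicking what span masking does deliberately.
print(uniform_mask(20, 0.40))
print(span_mask(20, 0.40))
```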

Another pivotal contribution of the paper is the conceptual disentangling of the masking rate into a corruption rate and a prediction rate. The corruption rate is the fraction of context tokens that are altered or removed, while the prediction rate is the fraction of tokens on which the model's predictions are scored. Through ablation experiments, the paper shows that a high prediction rate is beneficial because it provides more training signal per sequence, whereas a high corruption rate makes the task harder by leaving less context for each prediction. In standard MLM the two rates are tied together, and this framework suggests that at higher masking rates the optimization benefit of more predictions can outweigh the cost of heavier corruption.
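
One way to see this decoupling in code is to corrupt one fraction of the tokens but compute the MLM loss on only a (possibly smaller) random subset of them. The sketch below is a hypothetical illustration of that idea with invented parameter names (`corruption_rate`, `prediction_rate`); it is not a reproduction of the paper's ablation setup.

```python
import torch

def corrupt_and_label(input_ids: torch.Tensor, mask_token_id: int,
                      corruption_rate: float, prediction_rate: float):
    """Corrupt `corruption_rate` of tokens but score only `prediction_rate` of them.

    With prediction_rate == corruption_rate this reduces to ordinary MLM masking.
    """
    assert 0.0 < prediction_rate <= corruption_rate
    num_tokens = input_ids.size(0)
    corrupted = torch.rand(num_tokens) < corruption_rate

    # Keep only a random subset of the corrupted positions for the loss.
    keep_for_loss = torch.rand(num_tokens) < (prediction_rate / corruption_rate)
    predicted = corrupted & keep_for_loss

    labels = torch.full_like(input_ids, -100)   # -100 is ignored by cross-entropy
    labels[predicted] = input_ids[predicted]

    noisy_ids = input_ids.clone()
    noisy_ids[corrupted] = mask_token_id
    return noisy_ids, labels

# Example: corrupt 40% of the tokens but train on predictions for only 20%.
ids = torch.tensor([101, 2023, 2003, 1037, 7953, 6251, 102])
noisy, labels = corrupt_and_label(ids, mask_token_id=103,
                                  corruption_rate=0.4, prediction_rate=0.2)
```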

The authors also critically reevaluate the 80-10-10 corruption strategy introduced with BERT, in which 10% of the selected tokens are left unchanged and another 10% are replaced with random tokens instead of the [MASK] symbol. Empirically, they find that this strategy does not outperform simply replacing all selected tokens with [MASK], suggesting that keep-original and random-token substitutions are not essential given that MLMs are ultimately fine-tuned on complete, corruption-free inputs.
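
For reference, the 80-10-10 rule replaces each selected token with [MASK] 80% of the time, with a random vocabulary token 10% of the time, and leaves it unchanged the remaining 10%. The sketch below, assuming PyTorch tensors and a generic vocabulary size, shows this standard recipe that the paper compares against simply using [MASK] at every selected position.

```python
import torch

def apply_80_10_10(input_ids: torch.Tensor, selected: torch.Tensor,
                   mask_token_id: int, vocab_size: int) -> torch.Tensor:
    """Apply BERT's 80-10-10 rule at positions where `selected` is True."""
    corrupted = input_ids.clone()
    roll = torch.rand(input_ids.shape)

    # 80% of selected positions: replace with [MASK].
    corrupted[selected & (roll < 0.8)] = mask_token_id

    # 10% of selected positions: replace with a random vocabulary token.
    random_slot = selected & (roll >= 0.8) & (roll < 0.9)
    corrupted[random_slot] = torch.randint(vocab_size, (int(random_slot.sum()),))

    # Remaining 10%: keep the original token, so no change is needed.
    return corrupted
```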

In summary, this paper provides compelling insights into MLM pre-training, showing that higher masking rates can be beneficial, particularly for larger models, and that the trade-off between corruption and prediction rates should be considered when designing MLM training recipes. These findings have practical implications for improving pre-training efficiency and theoretical implications for our understanding of how masked language models learn. Future research could continue to probe the boundaries of masking practices, potentially yielding models that exploit higher masking rates without sacrificing downstream performance.

Authors (4)
  1. Alexander Wettig (21 papers)
  2. Tianyu Gao (35 papers)
  3. Zexuan Zhong (17 papers)
  4. Danqi Chen (84 papers)
Citations (139)