Improving Self-supervised Pre-training via a Fully-Explored Masked Language Model (2010.06040v2)

Published 12 Oct 2020 in cs.CL and cs.AI

Abstract: The Masked Language Model (MLM) framework has been widely adopted for self-supervised language pre-training. In this paper, we argue that randomly sampled masks in MLM would lead to undesirably large gradient variance. Thus, we theoretically quantify the gradient variance via correlating the gradient covariance with the Hamming distance between two different masks (given a certain text sequence). To reduce the variance due to the sampling of masks, we propose a fully-explored masking strategy, where a text sequence is divided into a certain number of non-overlapping segments. Thereafter, the tokens within one segment are masked for training. We prove, from a theoretical perspective, that the gradients derived from this new masking schema have a smaller variance and can lead to more efficient self-supervised training. We conduct extensive experiments on both continual pre-training and general pre-training from scratch. Empirical results confirm that this new masking strategy can consistently outperform standard random masking. Detailed efficiency analysis and ablation studies further validate the advantages of our fully-explored masking strategy under the MLM framework.
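The masking scheme described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes the segments are formed by randomly partitioning token positions into `num_segments` disjoint groups, and it uses a literal `[MASK]` string in place of a real tokenizer's mask token; the paper's exact segmentation and masking-ratio details may differ.

```python
import random
from typing import List

MASK_TOKEN = "[MASK]"  # placeholder; a real tokenizer supplies its own mask id


def fully_explored_masks(tokens: List[str], num_segments: int) -> List[List[str]]:
    """Sketch of the fully-explored masking idea from the abstract:
    token positions are partitioned into `num_segments` non-overlapping
    segments, and each training instance masks exactly one segment, so the
    masks across instances are disjoint and jointly cover the sequence.
    (Assumption: the partition here is a random shuffle; the paper may
    segment differently.)"""
    positions = list(range(len(tokens)))
    random.shuffle(positions)

    # Split the shuffled positions into num_segments roughly equal, disjoint parts.
    segments = [positions[i::num_segments] for i in range(num_segments)]

    masked_instances = []
    for segment in segments:
        masked = list(tokens)
        for pos in segment:
            masked[pos] = MASK_TOKEN
        masked_instances.append(masked)
    return masked_instances


if __name__ == "__main__":
    sentence = "the quick brown fox jumps over the lazy dog".split()
    for instance in fully_explored_masks(sentence, num_segments=3):
        print(" ".join(instance))
```

Because the segments are disjoint, the masks produced for one sequence never overlap, which is the property the paper relates (via Hamming distance between masks) to reduced gradient variance compared with independent random masking.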

Authors (5)
  1. Mingzhi Zheng (2 papers)
  2. Dinghan Shen (34 papers)
  3. Yelong Shen (83 papers)
  4. Weizhu Chen (128 papers)
  5. Lin Xiao (82 papers)
Citations (4)
