Representation Deficiency in Masked Language Modeling (2302.02060v2)

Published 4 Feb 2023 in cs.CL and cs.LG

Abstract: Masked Language Modeling (MLM) has been one of the most prominent approaches for pretraining bidirectional text encoders due to its simplicity and effectiveness. One notable concern about MLM is that the special $\texttt{[MASK]}$ symbol causes a discrepancy between pretraining data and downstream data as it is present only in pretraining but not in fine-tuning. In this work, we offer a new perspective on the consequence of such a discrepancy: We demonstrate empirically and theoretically that MLM pretraining allocates some model dimensions exclusively for representing $\texttt{[MASK]}$ tokens, resulting in a representation deficiency for real tokens and limiting the pretrained model's expressiveness when it is adapted to downstream data without $\texttt{[MASK]}$ tokens. Motivated by the identified issue, we propose MAE-LM, which pretrains the Masked Autoencoder architecture with MLM where $\texttt{[MASK]}$ tokens are excluded from the encoder. Empirically, we show that MAE-LM improves the utilization of model dimensions for real token representations, and MAE-LM consistently outperforms MLM-pretrained models across different pretraining settings and model sizes when fine-tuned on the GLUE and SQuAD benchmarks.
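The central mechanism described in the abstract is that the encoder never sees $\texttt{[MASK]}$ tokens: masked positions are dropped from the encoder input and only re-introduced in a lightweight decoder that makes the MLM predictions. The sketch below illustrates that idea in PyTorch. It is a minimal illustration under simplifying assumptions, not the paper's released implementation; the class name, layer sizes, shallow-decoder configuration, and the assumption of a uniform number of masked tokens per sequence are all hypothetical.

```python
import torch
import torch.nn as nn

class MAELMSketch(nn.Module):
    """Minimal sketch of MLM pretraining where [MASK] tokens are excluded
    from the encoder (MAE-style), as described in the abstract.
    All hyperparameters and the shallow-decoder design are assumptions."""

    def __init__(self, vocab_size=30522, d_model=256, n_enc=4, n_dec=2, n_head=4, max_len=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_head, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_enc)
        dec_layer = nn.TransformerEncoderLayer(d_model, n_head, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, n_dec)
        self.mask_embed = nn.Parameter(torch.zeros(d_model))  # placeholder used only in the decoder
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, input_ids, mlm_mask):
        # input_ids: (B, L) original token ids
        # mlm_mask:  (B, L) bool, True at positions selected for MLM prediction
        B, L = input_ids.shape
        positions = torch.arange(L, device=input_ids.device).unsqueeze(0)
        x = self.embed(input_ids) + self.pos(positions)

        # Encoder sees only real tokens: masked positions are dropped entirely,
        # so no model dimensions are spent representing [MASK] in the encoder.
        keep = ~mlm_mask
        # Simplifying assumption: every sequence keeps the same number of tokens.
        kept = x[keep].view(B, -1, x.size(-1))
        enc_out = self.encoder(kept)

        # A shallow decoder re-inserts mask placeholders at the masked positions
        # and predicts the original ids there (standard MLM loss).
        full = torch.zeros_like(x)
        full[keep] = enc_out.reshape(-1, x.size(-1))
        full[mlm_mask] = self.mask_embed
        full = full + self.pos(positions)
        dec_out = self.decoder(full)

        logits = self.lm_head(dec_out[mlm_mask])  # (num_masked, vocab_size)
        return logits
```

After pretraining in this style, only the encoder would be kept for fine-tuning, so downstream inputs (which contain no $\texttt{[MASK]}$ tokens) match what the encoder saw during pretraining.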

Authors (9)
  1. Yu Meng (92 papers)
  2. Jitin Krishnan (7 papers)
  3. Sinong Wang (45 papers)
  4. Qifan Wang (129 papers)
  5. Yuning Mao (34 papers)
  6. Han Fang (61 papers)
  7. Marjan Ghazvininejad (33 papers)
  8. Jiawei Han (263 papers)
  9. Luke Zettlemoyer (225 papers)
Citations (3)