InforMask: Unsupervised Informative Masking for Language Model Pretraining (2210.11771v1)
Published 21 Oct 2022 in cs.CL
Abstract: Masked language modeling is widely used for pretraining large language models for natural language understanding (NLU). However, random masking is suboptimal, allocating an equal masking rate to all tokens. In this paper, we propose InforMask, a new unsupervised masking strategy for training masked language models. InforMask exploits Pointwise Mutual Information (PMI) to select the most informative tokens to mask. We further propose two optimizations for InforMask to improve its efficiency. With a one-off preprocessing step, InforMask outperforms random masking and previously proposed masking strategies on the factual recall benchmark LAMA and the question answering benchmarks SQuAD v1 and v2.
- Nafis Sadeq (6 papers)
- Canwen Xu (32 papers)
- Julian McAuley (238 papers)
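
Below is a minimal Python sketch of the PMI-based selection idea described in the abstract: tokens in a sentence are scored by their aggregate PMI with the surrounding tokens, and the highest-scoring ones are masked. The corpus statistics, the summed-PMI score, the 15% masking rate, and the helper names (`build_stats`, `informative_mask`) are illustrative assumptions for this sketch, not the paper's implementation, which also includes two efficiency optimizations not shown here.

```python
# Illustrative sketch of PMI-guided informative masking (not the paper's exact algorithm).
import math
from collections import Counter
from itertools import combinations

def build_stats(corpus):
    """Count unigram and within-sentence co-occurrence frequencies over a tokenized corpus."""
    unigrams, pairs, total_tokens = Counter(), Counter(), 0
    for sent in corpus:
        total_tokens += len(sent)
        unigrams.update(sent)
        for a, b in combinations(sorted(set(sent)), 2):
            pairs[(a, b)] += 1
    return unigrams, pairs, total_tokens, len(corpus)

def pmi(a, b, unigrams, pairs, total_tokens, n_sents):
    """Rough PMI estimate for two token types from sentence-level co-occurrence counts."""
    if a == b or unigrams[a] == 0 or unigrams[b] == 0:
        return 0.0
    p_a = unigrams[a] / total_tokens
    p_b = unigrams[b] / total_tokens
    key = (a, b) if a < b else (b, a)
    p_ab = pairs[key] / n_sents
    return math.log(p_ab / (p_a * p_b)) if p_ab > 0 else 0.0

def informative_mask(sent, stats, mask_rate=0.15, mask_token="[MASK]"):
    """Mask the tokens whose summed PMI with the rest of the sentence is highest."""
    unigrams, pairs, total_tokens, n_sents = stats
    scores = {
        i: sum(pmi(tok, other, unigrams, pairs, total_tokens, n_sents)
               for j, other in enumerate(sent) if j != i)
        for i, tok in enumerate(sent)
    }
    k = max(1, round(mask_rate * len(sent)))
    to_mask = set(sorted(scores, key=scores.get, reverse=True)[:k])
    return [mask_token if i in to_mask else tok for i, tok in enumerate(sent)]

# Toy usage: strongly associated content tokens are preferred over frequent function words.
corpus = [
    ["the", "eiffel", "tower", "is", "in", "paris"],
    ["the", "louvre", "is", "in", "paris"],
    ["the", "cat", "sat", "on", "the", "mat"],
]
stats = build_stats(corpus)
print(informative_mask(corpus[0], stats))
```

This corresponds to the one-off preprocessing framing in the abstract: the corpus statistics are gathered once, and masking decisions at pretraining time reduce to scoring and ranking tokens within each sentence.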