MELM: Data Augmentation with Masked Entity Language Modeling for Low-Resource NER (2108.13655v2)
Abstract: Data augmentation is an effective solution to data scarcity in low-resource scenarios. However, when applied to token-level tasks such as NER, data augmentation methods often suffer from token-label misalignment, which leads to unsatisfactory performance. In this work, we propose Masked Entity Language Modeling (MELM) as a novel data augmentation framework for low-resource NER. To alleviate the token-label misalignment issue, we explicitly inject NER labels into the sentence context, so that the fine-tuned MELM is able to predict masked entity tokens by explicitly conditioning on their labels. MELM thereby generates high-quality augmented data with novel entities, which provides rich entity regularity knowledge and boosts NER performance. When training data from multiple languages are available, we further integrate MELM with code-mixing for additional improvement. We demonstrate the effectiveness of MELM on monolingual, cross-lingual and multilingual NER across various low-resource levels. Experimental results show that MELM yields substantial improvements over baseline methods.
- Ran Zhou
- Xin Li
- Ruidan He
- Lidong Bing
- Erik Cambria
- Luo Si
- Chunyan Miao
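
The core idea in the abstract is to inject NER labels into the sentence context and mask entity tokens, so the masked LM predicts substitutes conditioned on their labels. Below is a minimal sketch of that linearization step, not the authors' released code: the `<B-PER>`-style marker format, the mask string, and the function name are illustrative assumptions.

```python
# Sketch of MELM-style label injection and entity masking.
# Assumption: label markers wrap each entity token, and entity tokens are
# replaced by the underlying masked LM's mask token (e.g. XLM-R's <mask>),
# so a fine-tuned model can fill them in conditioned on the label.

MASK = "<mask>"  # mask token of the underlying masked LM (assumed)

def linearize_and_mask(tokens, labels, mask_entities=True):
    """Inject label markers into the sentence and optionally mask entities.

    tokens: list of words; labels: parallel list of BIO tags.
    Returns a single string to feed to the masked LM.
    """
    out = []
    for tok, lab in zip(tokens, labels):
        if lab == "O":
            out.append(tok)
        else:
            # Surround each entity token with its label so the LM can
            # condition on it when predicting the masked token.
            filler = MASK if mask_entities else tok
            out.append(f"<{lab}> {filler} </{lab}>")
    return " ".join(out)

if __name__ == "__main__":
    tokens = ["Obama", "visited", "Paris", "yesterday"]
    labels = ["B-PER", "O", "B-LOC", "O"]
    print(linearize_and_mask(tokens, labels))
    # -> <B-PER> <mask> </B-PER> visited <B-LOC> <mask> </B-LOC> yesterday
```

Filling each mask with a sampled prediction from the fine-tuned LM then yields an augmented sentence with a novel entity whose label is guaranteed to match, which is how the framework sidesteps token-label misalignment.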