MELM: Data Augmentation with Masked Entity Language Modeling for Low-Resource NER (2108.13655v2)

Published 31 Aug 2021 in cs.CL

Abstract: Data augmentation is an effective solution to data scarcity in low-resource scenarios. However, when applied to token-level tasks such as NER, data augmentation methods often suffer from token-label misalignment, which leads to unsatisfactory performance. In this work, we propose Masked Entity Language Modeling (MELM) as a novel data augmentation framework for low-resource NER. To alleviate the token-label misalignment issue, we explicitly inject NER labels into the sentence context, so that the fine-tuned MELM can predict masked entity tokens by explicitly conditioning on their labels. Thereby, MELM generates high-quality augmented data with novel entities, which provides rich entity regularity knowledge and boosts NER performance. When training data from multiple languages are available, we also integrate MELM with code-mixing for further improvement. We demonstrate the effectiveness of MELM on monolingual, cross-lingual and multilingual NER across various low-resource levels. Experimental results show that MELM achieves substantial improvements over the baseline methods.
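To make the label-injection idea concrete, below is a minimal sketch of how a labeled sentence might be linearized so that a masked language model can condition on entity labels. The marker format (`<B-PER> ... </B-PER>`) and the mask token are illustrative assumptions, not the paper's exact serialization:

```python
# Illustrative sketch of MELM-style input construction.
# The label-marker format and mask token are assumptions for
# illustration, not the authors' exact serialization.

MASK = "[MASK]"

def build_melm_input(tokens, labels):
    """Linearize a labeled sentence for masked entity language modeling.

    Entity tokens are wrapped in their NER label markers so the model
    can condition on the label, and the entity token itself is masked.
    """
    pieces = []
    for tok, lab in zip(tokens, labels):
        if lab == "O":
            pieces.append(tok)                       # non-entity: keep as-is
        else:
            pieces.append(f"<{lab}> {MASK} </{lab}>")  # entity: label + mask
    return " ".join(pieces)

if __name__ == "__main__":
    tokens = ["John", "lives", "in", "Paris", "."]
    labels = ["B-PER", "O", "O", "B-LOC", "O"]
    print(build_melm_input(tokens, labels))
    # -> <B-PER> [MASK] </B-PER> lives in <B-LOC> [MASK] </B-LOC> .
```

Fine-tuning a masked language model on such label-injected sequences lets it propose novel entity tokens at the masked positions while the surrounding markers pin down the entity type, which is how the augmented data stays label-aligned.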

Authors (7)
  1. Ran Zhou (35 papers)
  2. Xin Li (980 papers)
  3. Ruidan He (11 papers)
  4. Lidong Bing (144 papers)
  5. Erik Cambria (136 papers)
  6. Luo Si (73 papers)
  7. Chunyan Miao (145 papers)
Citations (78)