
Using Error Decay Prediction to Overcome Practical Issues of Deep Active Learning for Named Entity Recognition (1911.07335v2)

Published 17 Nov 2019 in cs.CL, cs.LG, and stat.ML

Abstract: Existing deep active learning algorithms achieve impressive sampling efficiency on natural language processing tasks. However, they exhibit several weaknesses in practice, including (a) inability to use uncertainty sampling with black-box models, (b) lack of robustness to labeling noise, and (c) lack of transparency. In response, we propose a transparent batch active sampling framework by estimating the error decay curves of multiple feature-defined subsets of the data. Experiments on four named entity recognition (NER) tasks demonstrate that the proposed methods significantly outperform diversification-based methods for black-box NER taggers, and can make the sampling process more robust to labeling noise when combined with uncertainty-based methods. Furthermore, the analysis of experimental results sheds light on the weaknesses of different active sampling strategies, and when traditional uncertainty-based or diversification-based methods can be expected to work well.

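The abstract describes selecting labeling batches by estimating error decay curves over feature-defined subsets of the data. The sketch below is an illustrative approximation of that idea, not the authors' exact algorithm: it assumes a power-law decay model, a greedy budget allocation, and hypothetical subset names and helper functions.

```python
# Illustrative sketch (assumptions, not the paper's exact method): fit a per-subset
# error decay curve from past labeling rounds, predict which subsets would gain the
# most from additional labels, and allocate the batch budget greedily.
import numpy as np
from scipy.optimize import curve_fit

def power_law_decay(n, a, b, c):
    """Assumed decay model: error(n) ~ a * n^(-b) + c."""
    return a * np.power(n, -b) + c

def fit_decay_curve(label_counts, errors):
    """Fit the decay model to (labels used, observed error) points for one subset."""
    p0 = [errors[0], 0.5, 0.0]                      # start error, mild decay, zero floor
    bounds = ([0.0, 0.0, 0.0], [np.inf, 5.0, 1.0])
    params, _ = curve_fit(power_law_decay, label_counts, errors, p0=p0, bounds=bounds)
    return params

def predicted_error_reduction(params, current_n, extra_n):
    """Predicted drop in subset error from labeling `extra_n` more examples."""
    return power_law_decay(current_n, *params) - power_law_decay(current_n + extra_n, *params)

def allocate_batch(subset_history, subset_sizes, budget, step=10):
    """Greedily assign the labeling budget in `step`-sized chunks to the subset
    with the largest predicted (size-weighted) error reduction."""
    fits = {s: fit_decay_curve(np.array(h["n"], dtype=float), np.array(h["err"]))
            for s, h in subset_history.items()}
    current = {s: h["n"][-1] for s, h in subset_history.items()}
    allocation = {s: 0 for s in subset_history}
    remaining = budget
    while remaining > 0:
        chunk = min(step, remaining)
        gains = {s: subset_sizes[s] *
                    predicted_error_reduction(fits[s], current[s] + allocation[s], chunk)
                 for s in subset_history}
        best = max(gains, key=gains.get)
        allocation[best] += chunk
        remaining -= chunk
    return allocation

if __name__ == "__main__":
    # Toy history: observed subset error at increasing label counts (hypothetical numbers).
    history = {
        "PERSON-like": {"n": [10, 20, 40, 80], "err": [0.42, 0.33, 0.27, 0.24]},
        "ORG-like":    {"n": [10, 20, 40, 80], "err": [0.55, 0.50, 0.47, 0.46]},
    }
    sizes = {"PERSON-like": 5000, "ORG-like": 3000}
    print(allocate_batch(history, sizes, budget=100))
```

In this toy run, the "PERSON-like" subset shows a steeper fitted decay, so it receives most of the budget; the size weighting is one simple way to make predicted per-example gains comparable across subsets of different sizes.
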
Authors (5)
  1. Haw-Shiuan Chang (22 papers)
  2. Shankar Vembu (9 papers)
  3. Sunil Mohan (7 papers)
  4. Rheeya Uppaal (8 papers)
  5. Andrew McCallum (132 papers)
Citations (3)