
AutoTriggER: Label-Efficient and Robust Named Entity Recognition with Auxiliary Trigger Extraction (2109.04726v3)

Published 10 Sep 2021 in cs.CL and cs.IR

Abstract: Deep neural models for named entity recognition (NER) have shown impressive results in overcoming label scarcity and generalizing to unseen entities by leveraging distant supervision and auxiliary information such as explanations. However, the costs of acquiring such additional information are generally prohibitive. In this paper, we present a novel two-stage framework (AutoTriggER) to improve NER performance by automatically generating and leveraging "entity triggers", which are human-readable cues in the text that help guide the model to make better decisions. Our framework leverages post-hoc explanation to generate rationales and strengthens a model's prior knowledge using an embedding interpolation technique. This approach allows models to exploit triggers to infer entity boundaries and types instead of solely memorizing the entity words themselves. Through experiments on three well-studied NER datasets, AutoTriggER shows strong label-efficiency, is capable of generalizing to unseen entities, and outperforms the RoBERTa-CRF baseline by nearly 0.5 F1 points on average.
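The abstract's "embedding interpolation technique" can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the convex-combination form, and the mixing weight `lam` are all assumptions used purely to convey the idea of blending ordinary token embeddings with trigger-conditioned ones.

```python
import numpy as np

def interpolate_embeddings(token_emb, trigger_emb, lam=0.5):
    """Blend token embeddings with trigger-aware embeddings.

    Hypothetical sketch: the model's input representation is taken
    as a convex combination of the plain token embedding and an
    embedding conditioned on the extracted trigger phrase, so the
    model can rely on trigger context rather than memorizing
    entity surface forms.
    """
    return lam * token_emb + (1.0 - lam) * trigger_emb

# Toy usage: a sentence of 4 tokens with 8-dimensional embeddings.
tok = np.random.rand(4, 8)    # ordinary token embeddings
trig = np.random.rand(4, 8)   # trigger-conditioned embeddings
mixed = interpolate_embeddings(tok, trig, lam=0.7)
assert mixed.shape == (4, 8)
```

The weight `lam` here stands in for however the paper balances the two sources of information; the actual mechanism may be learned rather than a fixed scalar.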

Authors (9)
  1. Dong-Ho Lee (30 papers)
  2. Ravi Kiran Selvam (3 papers)
  3. Sheikh Muhammad Sarwar (16 papers)
  4. Bill Yuchen Lin (72 papers)
  5. Fred Morstatter (64 papers)
  6. Jay Pujara (44 papers)
  7. Elizabeth Boschee (12 papers)
  8. James Allan (28 papers)
  9. Xiang Ren (194 papers)
Citations (1)
