
GNN-SL: Sequence Labeling Based on Nearest Examples via GNN (2212.02017v2)

Published 5 Dec 2022 in cs.CL

Abstract: To better handle long-tail cases in the sequence labeling (SL) task, in this work we introduce graph neural network sequence labeling (GNN-SL), which augments the vanilla SL model output with similar tagging examples retrieved from the whole training set. Since not all retrieved tagging examples benefit the model prediction, we construct a heterogeneous graph and leverage graph neural networks (GNNs) to transfer information between the retrieved tagging examples and the input word sequence. The augmented node, which aggregates information from its neighbors, is used for prediction. This strategy enables the model to directly draw on similar tagging examples and improves the overall quality of predictions. We conduct a variety of experiments on three typical sequence labeling tasks: Named Entity Recognition (NER), Part of Speech Tagging (POS), and Chinese Word Segmentation (CWS), to demonstrate the effectiveness of GNN-SL. Notably, GNN-SL achieves SOTA results of 96.9 (+0.2) on PKU, 98.3 (+0.4) on CITYU, 98.5 (+0.2) on MSR, and 96.9 (+0.2) on AS for the CWS task, and results comparable to SOTA performance on the NER and POS datasets.
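The retrieve-and-aggregate idea in the abstract can be sketched in a few lines. The snippet below is an illustrative assumption, not the authors' implementation: the datastore layout, the single attention-style aggregation standing in for the heterogeneous-graph GNN layer, and the `label_token` helper are all hypothetical, and the base encoder producing token representations is omitted.

```python
# Illustrative sketch (not the paper's code): augment one token's representation
# with k nearest tagging examples retrieved from the training set, then predict.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, num_tags, k = 64, 5, 8

# Hypothetical datastore built offline from the training set:
# one embedding and one gold tag per training token.
datastore_keys = torch.randn(10_000, d)            # token embeddings
datastore_tags = torch.randint(0, num_tags, (10_000,))

# Attention-style aggregation as a stand-in for the heterogeneous-GNN layer.
W_q = torch.nn.Linear(d, d, bias=False)
W_k = torch.nn.Linear(d, d, bias=False)
tag_emb = torch.nn.Embedding(num_tags, d)          # neighbor nodes carry their tags
classifier = torch.nn.Linear(2 * d, num_tags)

def label_token(h: torch.Tensor) -> torch.Tensor:
    """h: (d,) representation of one input token from the base SL encoder."""
    # 1) Retrieve the k nearest tagging examples by inner-product similarity.
    topk = (datastore_keys @ h).topk(k)
    neigh_h = datastore_keys[topk.indices]             # (k, d) neighbor embeddings
    neigh = neigh_h + tag_emb(datastore_tags[topk.indices])  # add tag information

    # 2) One round of message passing: the input-token node attends over its
    #    retrieved-example neighbors and aggregates their features.
    att = F.softmax(W_k(neigh) @ W_q(h), dim=0)        # (k,) attention weights
    aggregated = att @ neigh                           # (d,) aggregated message

    # 3) Predict from the augmented node (input + aggregated neighbor info).
    return classifier(torch.cat([h, aggregated]))      # (num_tags,) logits

logits = label_token(torch.randn(d))
print(logits.softmax(-1))
```

In the paper itself, the graph is heterogeneous (input tokens and retrieved examples are distinct node types) and multiple GNN layers propagate information before the augmented node is classified; the sketch collapses this to a single attention step for brevity.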

Authors (7)
  1. Shuhe Wang (18 papers)
  2. Yuxian Meng (37 papers)
  3. Rongbin Ouyang (6 papers)
  4. Jiwei Li (137 papers)
  5. Tianwei Zhang (199 papers)
  6. Lingjuan Lyu (131 papers)
  7. Guoyin Wang (108 papers)
Citations (8)
