Position-Aware Self-Attention based Neural Sequence Labeling (1908.09128v2)

Published 24 Aug 2019 in cs.CL and cs.LG

Abstract: Sequence labeling is a fundamental task in natural language processing and has been widely studied. Recently, RNN-based sequence labeling models have increasingly gained attention. Despite the superior performance achieved by learning long short-term (i.e., successive) dependencies, sequentially processing inputs may limit the ability to capture non-continuous relations among tokens within a sentence. To tackle this problem, we focus on how to effectively model both successive and discrete dependencies of each token to enhance sequence labeling performance. Specifically, we propose an innovative attention-based model (called position-aware self-attention, i.e., PSA) together with a well-designed self-attentional context fusion layer within a neural network architecture, to exploit the positional information of an input sequence for capturing the latent relations among tokens. Extensive experiments on three classical tasks in the sequence labeling domain, i.e., part-of-speech (POS) tagging, named entity recognition (NER) and phrase chunking, demonstrate that our proposed model outperforms state-of-the-art approaches on various metrics without any external knowledge.
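To make the idea concrete, the following is a minimal sketch, written from the abstract alone, of how positional information might be folded into a self-attention layer used as a context fusion step over an encoder's token representations. The class names, the learned relative-position bias, and the residual fusion are assumptions for illustration, not the authors' exact formulation of PSA.

```python
# Hypothetical sketch of position-aware self-attention for sequence labeling.
# Details (scoring function, position encoding, fusion layer) are assumed and
# may differ from the paper's actual PSA design.

import torch
import torch.nn as nn
import torch.nn.functional as F


class PositionAwareSelfAttention(nn.Module):
    """Self-attention whose scores are biased by relative token positions,
    letting each token attend to non-adjacent (discrete) dependencies."""

    def __init__(self, hidden_dim: int, max_relative_pos: int = 16):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)
        self.value = nn.Linear(hidden_dim, hidden_dim)
        # One learned bias per clipped relative distance in [-max, +max].
        self.rel_pos_bias = nn.Embedding(2 * max_relative_pos + 1, 1)
        self.max_relative_pos = max_relative_pos
        self.scale = hidden_dim ** 0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim)
        seq_len = x.size(1)
        q, k, v = self.query(x), self.key(x), self.value(x)

        # Content-based attention scores.
        scores = torch.matmul(q, k.transpose(-2, -1)) / self.scale

        # Relative positions i - j, clipped to the embedding range.
        pos = torch.arange(seq_len, device=x.device)
        rel = (pos[:, None] - pos[None, :]).clamp(
            -self.max_relative_pos, self.max_relative_pos
        ) + self.max_relative_pos
        scores = scores + self.rel_pos_bias(rel).squeeze(-1)

        weights = F.softmax(scores, dim=-1)
        context = torch.matmul(weights, v)  # (batch, seq_len, hidden_dim)
        # Fuse attended context with the original representation (a simple
        # residual stand-in for the paper's context fusion layer).
        return x + context


class SequenceLabeler(nn.Module):
    """Toy tagger: BiLSTM encoder + position-aware self-attention + softmax."""

    def __init__(self, vocab_size: int, num_tags: int, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.encoder = nn.LSTM(hidden_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        self.attention = PositionAwareSelfAttention(hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_tags)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        h, _ = self.encoder(self.embed(token_ids))
        return self.classifier(self.attention(h))  # per-token tag logits


if __name__ == "__main__":
    model = SequenceLabeler(vocab_size=1000, num_tags=9)
    tokens = torch.randint(0, 1000, (2, 12))  # batch of 2 twelve-token sentences
    print(model(tokens).shape)                # torch.Size([2, 12, 9])
```

The relative-position bias is one simple way to make attention position-aware; the paper's PSA may parameterize positions differently, but the overall flow (encode tokens, attend with position-sensitive scores, fuse, then classify each token) matches the architecture the abstract describes.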

Authors (6)
  1. Wei Wei (426 papers)
  2. Zanbo Wang (4 papers)
  3. Xianling Mao (15 papers)
  4. Guangyou Zhou (4 papers)
  5. Pan Zhou (221 papers)
  6. Sheng Jiang (33 papers)
Citations (23)
