
Jointly Trained Sequential Labeling and Classification by Sparse Attention Neural Networks (1709.10191v1)

Published 28 Sep 2017 in cs.CL

Abstract: Sentence-level classification and sequential labeling are two fundamental tasks in language understanding. While these two tasks are usually modeled separately, in reality, they are often correlated, for example in intent classification and slot filling, or in topic classification and named-entity recognition. In order to utilize the potential benefits from their correlations, we propose a jointly trained model for learning the two tasks simultaneously via Long Short-Term Memory (LSTM) networks. This model predicts the sentence-level category and the word-level label sequence from the stepwise output hidden representations of LSTM. We also introduce a novel mechanism of "sparse attention" to weigh words differently based on their semantic relevance to sentence-level classification. The proposed method outperforms baseline models on ATIS and TREC datasets.
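The abstract's core idea — pooling the LSTM's stepwise hidden states with attention weights that are forced to be sparse, so only semantically relevant words contribute to the sentence-level prediction — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scoring vector `w`, the threshold-and-renormalize sparsification, and the `sparse_attention_pool` helper are all assumptions for demonstration; the paper's exact sparse attention mechanism may differ.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sparse_attention_pool(H, w, threshold=0.1):
    """Pool stepwise hidden states into a sentence vector with sparse attention.

    H: (T, d) stepwise LSTM hidden states, one row per word (hypothetical input).
    w: (d,) learned scoring vector (hypothetical parameter).
    The threshold-then-renormalize step is an assumed sparsification;
    it zeroes out weakly attended words entirely.
    """
    scores = H @ w                                    # (T,) per-word relevance
    alpha = softmax(scores)                           # dense attention weights
    alpha = np.where(alpha < threshold, 0.0, alpha)   # sparsify: drop weak words
    alpha = alpha / alpha.sum()                       # renormalize survivors
    return alpha @ H, alpha                           # sentence vector, weights

# In the joint model, each row of H would also feed a per-word (slot) label
# classifier, while the pooled vector feeds the sentence-level classifier.
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))   # 5 words, hidden size 8
w = rng.standard_normal(8)
sent_vec, alpha = sparse_attention_pool(H, w)
```

The sparsity forces the sentence classifier to rely on a few content-bearing words (e.g. the verb and destination in an ATIS flight query) rather than a uniform average over the whole sentence.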

Authors (5)
  1. Mingbo Ma (32 papers)
  2. Kai Zhao (160 papers)
  3. Liang Huang (108 papers)
  4. Bing Xiang (74 papers)
  5. Bowen Zhou (141 papers)
Citations (16)
