
A Co-Interactive Transformer for Joint Slot Filling and Intent Detection (2010.03880v3)

Published 8 Oct 2020 in cs.CL

Abstract: Intent detection and slot filling are two main tasks for building a spoken language understanding (SLU) system. The two tasks are closely related, and information from one task can be utilized in the other. Previous studies either model the two tasks separately or consider only a single information flow from intent to slot. None of the prior approaches model the bidirectional connection between the two tasks simultaneously. In this paper, we propose a Co-Interactive Transformer to consider the cross-impact between the two tasks. Instead of adopting the self-attention mechanism of the vanilla Transformer, we propose a co-interactive module that captures this cross-impact by building a bidirectional connection between the two related tasks. In addition, the proposed co-interactive module can be stacked so the two tasks incrementally enhance each other with mutual features. Experimental results on two public datasets (SNIPS and ATIS) show that our model achieves state-of-the-art performance with considerable improvements (+3.4% and +0.9% overall accuracy). Extensive experiments empirically verify that our model successfully captures the mutual interaction knowledge.
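The co-interactive module described above can be pictured as replacing a Transformer layer's self-attention with two cross-attention flows, one in each direction between the slot and intent representations. Below is a minimal PyTorch sketch of that idea, assuming multi-head cross-attention with standard residual and feed-forward sub-layers; the class and parameter names (CoInteractiveLayer, d_model, n_heads) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class CoInteractiveLayer(nn.Module):
    """Sketch of one co-interactive layer: each task's representation
    attends to the other's (bidirectional cross-attention) instead of
    attending to itself as in a vanilla Transformer layer."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # Two cross-attention flows: slot -> intent and intent -> slot.
        self.slot_attends_intent = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.intent_attends_slot = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.slot_ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model))
        self.intent_ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model))
        self.norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(4))

    def forward(self, h_slot, h_intent):
        # h_slot, h_intent: (batch, seq_len, d_model)
        # Cross-impact: each task queries the other task's representation.
        s2i, _ = self.slot_attends_intent(h_slot, h_intent, h_intent)
        i2s, _ = self.intent_attends_slot(h_intent, h_slot, h_slot)
        h_slot = self.norms[0](h_slot + s2i)
        h_intent = self.norms[1](h_intent + i2s)
        h_slot = self.norms[2](h_slot + self.slot_ffn(h_slot))
        h_intent = self.norms[3](h_intent + self.intent_ffn(h_intent))
        return h_slot, h_intent


# Layers can be stacked so the two tasks enhance each other incrementally.
layers = nn.ModuleList(CoInteractiveLayer() for _ in range(2))
h_slot = torch.randn(8, 20, 256)    # token-level features for slot filling
h_intent = torch.randn(8, 20, 256)  # intent features aligned to the tokens
for layer in layers:
    h_slot, h_intent = layer(h_slot, h_intent)
```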

Authors (6)
  1. Libo Qin (77 papers)
  2. Tailu Liu (1 paper)
  3. Wanxiang Che (152 papers)
  4. Bingbing Kang (1 paper)
  5. Sendong Zhao (31 papers)
  6. Ting Liu (329 papers)
Citations (111)
