
Local Contextual Attention with Hierarchical Structure for Dialogue Act Recognition (2003.06044v1)

Published 12 Mar 2020 in cs.CL

Abstract: Dialogue act recognition is a fundamental task for an intelligent dialogue system. Previous work models the whole dialog to predict dialog acts, which may introduce noise from unrelated sentences. In this work, we design a hierarchical model based on self-attention to capture intra-sentence and inter-sentence information. We revise the attention distribution to focus on local and contextual semantic information by incorporating the relative position information between utterances. Based on the finding that dialog length affects performance, we introduce a new dialog segmentation mechanism to analyze the effect of dialog length and context padding length under online and offline settings. Experiments show that our method achieves promising performance on two datasets, Switchboard Dialogue Act and DailyDialog, with accuracies of 80.34% and 85.81%, respectively. Visualization of the attention weights shows that our method learns the context dependency between utterances explicitly.
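
The abstract describes revising the attention distribution with relative position information so that each utterance attends to a local context window. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation: the class name, the scalar per-offset bias, and the default window size are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class LocalContextualAttention(nn.Module):
    """Illustrative sketch: self-attention over utterance vectors,
    restricted to a local window and shifted by a learned
    relative-position bias (names and window size are assumptions)."""

    def __init__(self, dim: int, window: int = 3):
        super().__init__()
        self.window = window
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # one learned scalar bias per relative offset in [-window, +window]
        self.rel_bias = nn.Parameter(torch.zeros(2 * window + 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_utterances, dim)
        n, d = x.size(1), x.size(2)
        q, k, v = self.q(x), self.k(x), self.v(x)
        logits = q @ k.transpose(-2, -1) / d ** 0.5  # (batch, n, n)

        # relative offset j - i between query position i and key position j
        pos = torch.arange(n, device=x.device)
        offsets = pos[None, :] - pos[:, None]        # (n, n)
        inside = offsets.abs() <= self.window        # local window mask

        # add the relative-position bias, then mask outside the window
        idx = (offsets + self.window).clamp(0, 2 * self.window)
        logits = logits + self.rel_bias[idx]
        logits = logits.masked_fill(~inside, float("-inf"))

        attn = torch.softmax(logits, dim=-1)  # each utterance attends locally
        return attn @ v                       # contextual utterance states


# usage: one dialog of 8 utterance vectors with dimension 64
x = torch.randn(1, 8, 64)
out = LocalContextualAttention(dim=64, window=3)(x)
print(out.shape)  # torch.Size([1, 8, 64])
```

Masking to a fixed window is one way to realize "local contextual" attention over utterances; per the abstract, the paper's hierarchical model also captures intra-sentence information before this inter-utterance layer.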

Authors (6)
  1. Zhigang Dai
  2. Jinhua Fu
  3. Qile Zhu
  4. Hengbin Cui
  5. Yuan Qi
  6. Xiaolong Li
Citations (10)