Guiding attention in Sequence-to-sequence models for Dialogue Act prediction (2002.08801v2)

Published 20 Feb 2020 in cs.CL and cs.LG

Abstract: The task of predicting dialog acts (DA) based on conversational dialog is a key component in the development of conversational agents. Accurately predicting DAs requires precise modeling of both the conversation and the global tag dependencies. We leverage seq2seq approaches widely adopted in Neural Machine Translation (NMT) to improve the modeling of tag sequentiality. Seq2seq models are known to learn complex global dependencies, while currently proposed approaches using linear conditional random fields (CRF) only model local tag dependencies. In this work, we introduce a seq2seq model tailored for DA classification using: a hierarchical encoder, a novel guided attention mechanism, and beam search applied to both training and inference. Compared to the state of the art, our model does not require handcrafted features and is trained end-to-end. Furthermore, the proposed approach achieves an unmatched accuracy score of 85% on SwDA, and a state-of-the-art accuracy score of 91.6% on MRDA.
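To illustrate the beam search component the abstract mentions, here is a minimal, self-contained sketch of beam search over a dialog-act tag vocabulary. The `score_fn`, the two-tag vocabulary, and the probabilities are hypothetical stand-ins for the paper's trained seq2seq decoder, not the authors' implementation:

```python
import math

def beam_search(score_fn, vocab, seq_len, beam_width=3):
    """Illustrative beam search over a tag vocabulary.

    score_fn(prefix, tag) returns the log-probability of emitting
    `tag` after the partial tag sequence `prefix` (a stand-in for
    a seq2seq decoder's conditional distribution).
    """
    beams = [([], 0.0)]  # (tag sequence, cumulative log-prob)
    for _ in range(seq_len):
        candidates = []
        for prefix, score in beams:
            for tag in vocab:
                candidates.append(
                    (prefix + [tag], score + score_fn(prefix, tag))
                )
        # Keep only the top-k scoring partial sequences.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][0]  # best-scoring complete tag sequence

# Toy scorer (hypothetical): favors alternating tags, as if
# questions and answers tended to alternate in a dialog.
def toy_score_fn(prefix, tag):
    prev = prefix[-1] if prefix else "A"
    return math.log(0.9) if tag != prev else math.log(0.1)

print(beam_search(toy_score_fn, ["Q", "A"], 4))  # → ['Q', 'A', 'Q', 'A']
```

Because the beam retains several partial hypotheses instead of greedily committing to the single best tag at each step, it can capture the kind of global tag dependencies the abstract contrasts with local CRF modeling.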

Authors (6)
  1. Pierre Colombo (48 papers)
  2. Emile Chapuis (30 papers)
  3. Matteo Manica (28 papers)
  4. Emmanuel Vignon (6 papers)
  5. Giovanna Varni (8 papers)
  6. Chloe Clavel (9 papers)
Citations (61)