A non-hierarchical attention network with modality dropout for textual response generation in multimodal dialogue systems (2110.09702v2)

Published 19 Oct 2021 in cs.CL

Abstract: Existing text- and image-based multimodal dialogue systems use the traditional Hierarchical Recurrent Encoder-Decoder (HRED) framework, which has an utterance-level encoder to model utterance representations and a context-level encoder to model context representations. Although pioneering efforts have shown promising performance, they still suffer from the following challenges: (1) the interaction between textual and visual features is not fine-grained enough, and (2) the context representation cannot provide a complete representation of the context. To address these issues, we propose a non-hierarchical attention network with modality dropout, which abandons the HRED framework and instead uses attention modules to encode each utterance and model the context representation. To evaluate our proposed model, we conduct comprehensive experiments on a public multimodal dialogue dataset. Automatic and human evaluations demonstrate that our proposed model outperforms existing methods and achieves state-of-the-art performance.
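
The abstract names the mechanism but gives no implementation details. Below is a minimal illustrative sketch of what a modality-dropout layer could look like in PyTorch; the drop probability `p`, the batch-first `(batch, len, dim)` feature shapes, and the rule that at most one modality is zeroed per example are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ModalityDropout(nn.Module):
    """Randomly zeroes out an entire modality's features during training.

    Illustrative sketch only: the drop probability and zero-fill strategy
    are assumptions, not the authors' exact formulation.
    """

    def __init__(self, p: float = 0.3):
        super().__init__()
        self.p = p

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor):
        # At inference time, pass both modalities through unchanged.
        if not self.training:
            return text_feats, image_feats

        batch = text_feats.size(0)
        # Draw an independent drop decision per modality, per batch element.
        drop_text = torch.rand(batch, device=text_feats.device) < self.p
        drop_image = torch.rand(batch, device=image_feats.device) < self.p
        # Never drop both modalities for the same example: if both were
        # selected, keep the image features and drop only the text.
        both = drop_text & drop_image
        drop_image = drop_image & ~both

        # Broadcast the keep-mask over (batch, len, dim) features.
        text_feats = text_feats * (~drop_text).float().view(-1, 1, 1)
        image_feats = image_feats * (~drop_image).float().view(-1, 1, 1)
        return text_feats, image_feats

# Usage with assumed shapes: text tokens (B, Lt, D), image regions (B, Lv, D).
layer = ModalityDropout(p=0.3)
text, image = layer(torch.randn(4, 20, 512), torch.randn(4, 36, 512))
```

The point of such a layer is regularization: by occasionally hiding one modality during training, the model is pushed to exploit fine-grained cross-modal attention rather than over-relying on either the textual or the visual channel alone.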

Authors (6)
  1. Rongyi Sun (3 papers)
  2. Borun Chen (3 papers)
  3. Qingyu Zhou (28 papers)
  4. Yinghui Li (65 papers)
  5. Hai-Tao Zheng (94 papers)
  6. Yunbo Cao (43 papers)
Citations (9)