DSTC8-AVSD: Multimodal Semantic Transformer Network with Retrieval Style Word Generator (2004.08299v1)

Published 1 Apr 2020 in cs.CL and cs.LG

Abstract: Audio Visual Scene-aware Dialog (AVSD) is the task of generating a response to a question given a scene, video, audio, and the history of previous turns in the dialog. Existing systems for this task employ transformer- or recurrent neural network-based architectures within the encoder-decoder framework. Although these techniques achieve strong performance on this task, they have significant limitations: the model easily overfits, merely memorizing grammatical patterns, and it follows the prior distribution of the vocabulary in the dataset. To alleviate these problems, we propose a Multimodal Semantic Transformer Network. It employs a transformer-based architecture with an attention-based word embedding layer that generates words by querying word embeddings. With this design, our model keeps considering the meaning of the words at the generation stage. The empirical results demonstrate the superiority of our proposed model, which outperforms most previous works on the AVSD task.
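The retrieval-style generation described in the abstract can be illustrated with a small sketch: rather than a separate output projection, the decoder's hidden state acts as a query that attends over the word-embedding matrix, so output probabilities reflect semantic similarity to the embeddings. The code below is a minimal NumPy sketch under assumed shapes and scaling; the function name and details are illustrative, not the authors' exact implementation.

```python
import numpy as np

def retrieval_style_probs(hidden, embeddings):
    """Score each vocabulary word by scaled dot-product attention between
    the decoder hidden state (query) and the word-embedding matrix (keys).

    hidden:     (d,)   decoder hidden state at the current step
    embeddings: (V, d) word-embedding matrix (shared with the input layer)
    returns:    (V,)   probability distribution over the vocabulary
    """
    d = hidden.shape[-1]
    scores = embeddings @ hidden / np.sqrt(d)  # attention scores per word
    scores -= scores.max()                     # numerical stability for softmax
    probs = np.exp(scores)
    return probs / probs.sum()

# Toy example: 5-word vocabulary with 4-dimensional embeddings.
rng = np.random.default_rng(0)
E = rng.normal(size=(5, 4))
h = E[2] + 0.01 * rng.normal(size=4)  # query near word 2's embedding
p = retrieval_style_probs(h, E)
print(p)
```

Because the output distribution is computed directly against the embedding table, words with similar embeddings receive similar probabilities, which is the mechanism the paper uses to keep word meaning in play at generation time.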

Authors (6)
  1. Hwanhee Lee (36 papers)
  2. Seunghyun Yoon (64 papers)
  3. Franck Dernoncourt (161 papers)
  4. Doo Soon Kim (20 papers)
  5. Trung Bui (79 papers)
  6. Kyomin Jung (76 papers)
Citations (15)