End-to-End Audio Visual Scene-Aware Dialog using Multimodal Attention-Based Video Features (1806.08409v2)

Published 21 Jun 2018 in cs.CL, cs.CV, cs.SD, and eess.AS

Abstract: Dialog systems need to understand dynamic visual scenes in order to have conversations with users about the objects and events around them. Scene-aware dialog systems for real-world applications could be developed by integrating state-of-the-art technologies from multiple research areas, including: end-to-end dialog technologies, which generate system responses using models trained from dialog data; visual question answering (VQA) technologies, which answer questions about images using learned image features; and video description technologies, in which descriptions/captions are generated from videos using multimodal information. We introduce a new dataset of dialogs about videos of human behaviors. Each dialog is a typed conversation that consists of a sequence of 10 question-and-answer (QA) pairs between two Amazon Mechanical Turk (AMT) workers. In total, we collected dialogs on roughly 9,000 videos. Using this new dataset for Audio Visual Scene-aware Dialog (AVSD), we trained an end-to-end conversation model that generates responses in a dialog about a video. Our experiments demonstrate that using multimodal features that were developed for multimodal attention-based video description enhances the quality of generated dialog about dynamic scenes (videos). Our dataset, model code and pretrained models will be publicly available for a new Video Scene-Aware Dialog challenge.
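The abstract does not spell out how the multimodal attention over video features works, but a common pattern for this kind of fusion is two-level attention: attend over each modality's feature sequence in time, then attend over the resulting per-modality summaries, both conditioned on a query built from the question and dialog history. The sketch below is a minimal, hypothetical illustration of that pattern; all class names, dimensions, and the specific scoring functions are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ModalityAttentionFusion(nn.Module):
    """Hypothetical sketch of query-conditioned multimodal attention:
    temporal attention within each modality, then attention across
    modality summaries. Not the paper's exact architecture."""

    def __init__(self, feat_dims, query_dim, hidden_dim=256):
        super().__init__()
        # One projection per modality (e.g., visual, audio) into a shared space.
        self.proj = nn.ModuleList([nn.Linear(d, hidden_dim) for d in feat_dims])
        self.query_proj = nn.Linear(query_dim, hidden_dim)
        # Scalar scorers for temporal and modality-level attention.
        self.temporal_score = nn.Linear(hidden_dim, 1)
        self.modality_score = nn.Linear(hidden_dim, 1)

    def forward(self, features, query):
        # features: list of tensors, each (batch, time_m, feat_dim_m)
        # query:    (batch, query_dim), e.g., encoded question + dialog history
        q = self.query_proj(query).unsqueeze(1)                  # (batch, 1, hidden)
        summaries = []
        for feats, proj in zip(features, self.proj):
            h = proj(feats)                                      # (batch, time, hidden)
            alpha = torch.softmax(self.temporal_score(torch.tanh(h + q)), dim=1)
            summaries.append((alpha * h).sum(dim=1))             # (batch, hidden)
        m = torch.stack(summaries, dim=1)                        # (batch, n_modalities, hidden)
        beta = torch.softmax(self.modality_score(torch.tanh(m + q)), dim=1)
        return (beta * m).sum(dim=1)                             # fused context vector

# Example usage: fuse a visual stream (2048-d) and an audio stream (128-d).
fusion = ModalityAttentionFusion(feat_dims=[2048, 128], query_dim=512)
visual = torch.randn(4, 20, 2048)   # batch of 4, 20 video frames/segments
audio = torch.randn(4, 30, 128)     # audio stream may have a different length
query = torch.randn(4, 512)
context = fusion([visual, audio], query)
print(context.shape)                # torch.Size([4, 256])
```

The fused context vector would then be consumed by a response decoder; the key point the abstract makes is that features developed for multimodal attention-based video description improve the generated dialog responses compared with weaker video representations.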

Authors (13)
  1. Chiori Hori (21 papers)
  2. Huda Alamri (5 papers)
  3. Jue Wang (203 papers)
  4. Gordon Wichern (51 papers)
  5. Takaaki Hori (41 papers)
  6. Anoop Cherian (65 papers)
  7. Tim K. Marks (22 papers)
  8. Vincent Cartillier (9 papers)
  9. Raphael Gontijo Lopes (8 papers)
  10. Abhishek Das (61 papers)
  11. Irfan Essa (91 papers)
  12. Dhruv Batra (160 papers)
  13. Devi Parikh (129 papers)
Citations (124)