Dynamic Graph Representation Learning for Video Dialog via Multi-Modal Shuffled Transformers (2007.03848v2)

Published 8 Jul 2020 in cs.CV and cs.CL

Abstract: Given an input video, its associated audio, and a brief caption, the audio-visual scene aware dialog (AVSD) task requires an agent to engage in a question-answer dialog with a human about the audio-visual content. This task thus poses a challenging multi-modal representation learning and reasoning scenario, advancements in which could influence several human-machine interaction applications. To solve this task, we introduce a semantics-controlled multi-modal shuffled Transformer reasoning framework, consisting of a sequence of Transformer modules, each taking a modality as input and producing representations conditioned on the input question. Our proposed Transformer variant uses a shuffling scheme on its multi-head outputs, demonstrating better regularization. To encode fine-grained visual information, we present a novel dynamic scene graph representation learning pipeline that consists of an intra-frame reasoning layer producing spatio-semantic graph representations for every frame, and an inter-frame aggregation module capturing temporal cues. Our entire pipeline is trained end-to-end. We present experiments on the benchmark AVSD dataset, both on answer generation and selection tasks. Our results demonstrate state-of-the-art performances on all evaluation metrics.
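The head-shuffling regularizer mentioned in the abstract can be illustrated with a minimal sketch. The snippet below is a hypothetical NumPy illustration, not the authors' implementation: it randomly permutes the head axis of a multi-head attention output before the heads would be concatenated and projected, which discourages the model from relying on any fixed head ordering.

```python
import numpy as np

def shuffle_heads(multi_head_out, rng):
    """Randomly permute the head axis of a multi-head attention output.

    multi_head_out: array of shape (batch, num_heads, seq_len, head_dim).
    Hypothetical sketch of the shuffling scheme: the set of head outputs
    is preserved, only their order changes, so the subsequent output
    projection cannot specialize to a fixed head arrangement.
    """
    num_heads = multi_head_out.shape[1]
    perm = rng.permutation(num_heads)  # random ordering of the heads
    return multi_head_out[:, perm, :, :]

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8, 5, 16))  # (batch=2, heads=8, seq=5, dim=16)
y = shuffle_heads(x, rng)
# y has the same shape and the same multiset of per-head outputs as x.
```

In training, such a shuffle would typically be applied only in the training pass (like dropout) and skipped at inference time.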

Authors (8)
  1. Shijie Geng (33 papers)
  2. Peng Gao (401 papers)
  3. Moitreya Chatterjee (16 papers)
  4. Chiori Hori (21 papers)
  5. Jonathan Le Roux (82 papers)
  6. Yongfeng Zhang (163 papers)
  7. Hongsheng Li (340 papers)
  8. Anoop Cherian (65 papers)
Citations (11)