VU-BERT: A Unified Framework for Visual Dialog (2202.10787v1)

Published 22 Feb 2022 in cs.CL, cs.AI, cs.CV, and cs.LG

Abstract: The visual dialog task trains an agent to answer multi-turn questions about an image, which requires a deep understanding of the interactions between the image and the dialog history. Existing research tends to employ modality-specific modules to model these interactions, which can be cumbersome. To fill this gap, we propose VU-BERT, a unified framework for image-text joint embedding, and we are the first to apply patch projection to obtain vision embeddings in visual dialog tasks, simplifying the model. The model is trained on two tasks: masked language modeling and next utterance retrieval. These tasks help the model learn visual concepts, utterance dependencies, and the relationships between the two modalities. VU-BERT achieves competitive performance (0.7287 NDCG) on the VisDial v1.0 dataset.

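The patch projection mentioned in the abstract is the ViT-style mechanism: the image is cut into fixed-size patches, each flattened and linearly projected into the transformer's embedding space, so image and text tokens can share a single encoder. Below is a minimal PyTorch sketch of such a joint embedding. All sizes (224×224 images, 16×16 patches, 768-dim embeddings, BERT-size vocabulary) and the class names are illustrative assumptions, not values from the paper; positional and segment embeddings, which a BERT-style model would also need, are omitted for brevity.

```python
import torch
import torch.nn as nn


class PatchProjection(nn.Module):
    """ViT-style patch projection (illustrative sketch, not the paper's code).

    Splits an image into non-overlapping patches and linearly projects
    each patch into the embedding dimension.
    """

    def __init__(self, image_size=224, patch_size=16, in_channels=3, embed_dim=768):
        super().__init__()
        self.num_patches = (image_size // patch_size) ** 2
        # A strided convolution is equivalent to flattening each patch
        # and applying one shared linear layer to all of them.
        self.proj = nn.Conv2d(in_channels, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, images):                   # images: (B, 3, H, W)
        x = self.proj(images)                    # (B, D, H/P, W/P)
        return x.flatten(2).transpose(1, 2)      # (B, num_patches, D)


class JointEmbedding(nn.Module):
    """Hypothetical image-text joint embedding in the spirit of VU-BERT:
    patch embeddings and token embeddings are concatenated into one
    sequence that a single transformer encoder can consume."""

    def __init__(self, vocab_size=30522, embed_dim=768):
        super().__init__()
        self.patch_embed = PatchProjection(embed_dim=embed_dim)
        self.token_embed = nn.Embedding(vocab_size, embed_dim)

    def forward(self, images, token_ids):
        vision = self.patch_embed(images)        # (B, N_patches, D)
        text = self.token_embed(token_ids)       # (B, N_tokens, D)
        return torch.cat([vision, text], dim=1)  # unified sequence


# Usage example: 196 patches + 32 text tokens -> one 228-token sequence.
emb = JointEmbedding()
images = torch.randn(2, 3, 224, 224)
tokens = torch.randint(0, 30522, (2, 32))
print(emb(images, tokens).shape)  # torch.Size([2, 228, 768])
```

A unified sequence like this is what lets the two pretraining tasks (masked language modeling and next utterance retrieval) be trained with one encoder instead of separate modality-specific modules.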
Authors (6)
  1. Tong Ye (34 papers)
  2. Shijing Si (32 papers)
  3. Jianzong Wang (144 papers)
  4. Rui Wang (996 papers)
  5. Ning Cheng (96 papers)
  6. Jing Xiao (267 papers)
Citations (5)