Latent Variable Models for Visual Question Answering (2101.06399v2)

Published 16 Jan 2021 in cs.CV, cs.AI, and cs.CL

Abstract: Current work on Visual Question Answering (VQA) explores deterministic approaches conditioned on various types of image and question features. We posit that, in addition to image and question pairs, other modalities are useful for teaching machines to carry out question answering. Hence, in this paper, we propose latent variable models for VQA in which extra information (e.g., captions and answer categories) is incorporated as latent variables that are observed during training but still benefit question-answering performance at test time. Experiments on the VQA v2.0 benchmark dataset demonstrate the effectiveness of our proposed models: they improve over strong baselines, especially those that do not rely on extensive language-vision pre-training.
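The core idea in the abstract, treating extra information such as captions as a latent variable that is observed during training but must be inferred at test time, can be sketched as a conditional VAE-style model. The sketch below is an illustrative assumption rather than the authors' exact architecture: the layer sizes, the Gaussian latent, the fusion scheme, and the KL-weighted loss are all placeholders.

```python
# Minimal sketch (assumed, not the paper's exact model) of latent-variable VQA:
# a caption embedding informs a latent z via an approximate posterior during
# training; at test time z is sampled from a prior conditioned only on the
# image and question, so no caption is needed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentCaptionVQA(nn.Module):
    def __init__(self, img_dim=2048, q_dim=768, cap_dim=768,
                 z_dim=256, n_answers=3129):
        super().__init__()
        fused = 1024
        self.fuse = nn.Sequential(nn.Linear(img_dim + q_dim, fused), nn.ReLU())
        # Prior p(z | image, question): the only source of z at test time.
        self.prior = nn.Linear(fused, 2 * z_dim)
        # Posterior q(z | image, question, caption): used when captions are observed.
        self.post = nn.Linear(fused + cap_dim, 2 * z_dim)
        self.classifier = nn.Sequential(
            nn.Linear(fused + z_dim, 1024), nn.ReLU(),
            nn.Linear(1024, n_answers))

    def forward(self, img_feat, q_feat, cap_feat=None):
        h = self.fuse(torch.cat([img_feat, q_feat], dim=-1))
        p_mu, p_logvar = self.prior(h).chunk(2, dim=-1)
        if cap_feat is not None:
            # Training: caption observed, sample z from the posterior.
            q_mu, q_logvar = self.post(torch.cat([h, cap_feat], dim=-1)).chunk(2, dim=-1)
            z = q_mu + torch.randn_like(q_mu) * torch.exp(0.5 * q_logvar)
            # KL(q || p) keeps the prior usable once captions are absent.
            kl = 0.5 * (p_logvar - q_logvar
                        + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp()
                        - 1).sum(-1).mean()
        else:
            # Test: sample z from the prior; no caption is required.
            z = p_mu + torch.randn_like(p_mu) * torch.exp(0.5 * p_logvar)
            kl = torch.zeros((), device=z.device)
        logits = self.classifier(torch.cat([h, z], dim=-1))
        return logits, kl


# Usage with random tensors standing in for pre-extracted features.
model = LatentCaptionVQA()
img, q, cap = torch.randn(4, 2048), torch.randn(4, 768), torch.randn(4, 768)
logits, kl = model(img, q, cap_feat=cap)                      # training pass
loss = F.cross_entropy(logits, torch.randint(0, 3129, (4,))) + 0.1 * kl
logits_test, _ = model(img, q)                                # inference pass
```

The design point this sketch illustrates is that the extra modality only shapes the latent space through the posterior and the KL term, so the model still runs on image-question pairs alone at test time.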

Authors (3)
  1. Zixu Wang (26 papers)
  2. Yishu Miao (19 papers)
  3. Lucia Specia (68 papers)
Citations (3)