Compact Trilinear Interaction for Visual Question Answering (1909.11874v1)

Published 26 Sep 2019 in cs.CV

Abstract: In Visual Question Answering (VQA), answers are strongly correlated with the question meaning and the visual content. Thus, to selectively utilize image, question, and answer information, we propose a novel trilinear interaction model that simultaneously learns high-level associations between these three inputs. In addition, to overcome the interaction complexity, we introduce a multimodal tensor-based PARALIND decomposition that efficiently parameterizes the trilinear interaction between the three inputs. Moreover, knowledge distillation is applied for the first time to free-form open-ended VQA, not only to reduce the computational cost and memory requirements but also to transfer knowledge from the trilinear interaction model to a bilinear interaction model. Extensive experiments on the benchmark datasets TDIUC, VQA-2.0, and Visual7W show that the proposed compact trilinear interaction model achieves state-of-the-art single-model results on all three datasets.
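
To make the fusion idea concrete, below is a minimal NumPy sketch of a low-rank (CP-style) trilinear interaction between image, question, and answer features, plus a soft-target distillation loss of the kind used to transfer a trilinear teacher into a bilinear student. This is an illustration under assumed dimensions and a simplified factorization, not the paper's exact PARALIND parameterization or training setup; all names (`W_v`, `trilinear_interaction`, `distillation_loss`, the temperature `T`) are hypothetical.

```python
import numpy as np

# Hypothetical dimensions (not taken from the paper): image, question and
# answer feature sizes, rank R of the low-rank factorization, output size d_z.
d_v, d_q, d_a, R, d_z = 2048, 1024, 300, 32, 512
rng = np.random.default_rng(0)

# Factor matrices standing in for the decomposed interaction tensor.
W_v = rng.standard_normal((d_v, R)) * 0.01
W_q = rng.standard_normal((d_q, R)) * 0.01
W_a = rng.standard_normal((d_a, R)) * 0.01
W_o = rng.standard_normal((R, d_z)) * 0.01

def trilinear_interaction(v, q, a):
    """Low-rank trilinear fusion of image (v), question (q) and answer (a).

    Equivalent to contracting a rank-R CP-factorized tensor with the three
    inputs: z = W_o^T ((W_v^T v) * (W_q^T q) * (W_a^T a)), where * is the
    element-wise product over the R latent components.
    """
    return ((v @ W_v) * (q @ W_q) * (a @ W_a)) @ W_o

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions; the exact loss form and temperature are assumptions."""
    def softmax(x):
        e = np.exp((x - x.max()) / T)
        return e / e.sum()
    p_t, p_s = softmax(teacher_logits), softmax(student_logits)
    return float(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))))

# Toy usage with random stand-ins for pooled image, question and answer features.
v = rng.standard_normal(d_v)
q = rng.standard_normal(d_q)
a = rng.standard_normal(d_a)
z = trilinear_interaction(v, q, a)
print(z.shape)  # (512,)
```

The design point the sketch captures is that the full interaction tensor (of size d_v x d_q x d_a x d_z) is never materialized; only the small factor matrices are learned, which is what makes the trilinear interaction compact.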

Authors (5)
  1. Tuong Do (20 papers)
  2. Thanh-Toan Do (92 papers)
  3. Huy Tran (30 papers)
  4. Erman Tjiputra (21 papers)
  5. Quang D. Tran (20 papers)
Citations (57)
