Improving Vision-and-Language Reasoning via Spatial Relations Modeling (2311.05298v1)

Published 9 Nov 2023 in cs.CV

Abstract: Visual commonsense reasoning (VCR) is a challenging multi-modal task that requires high-level cognition and commonsense reasoning about the real world. In recent years, large-scale pre-training approaches have been developed and have advanced the state-of-the-art performance on VCR. However, existing approaches mostly employ BERT-like objectives to learn multi-modal representations. These objectives, originating in the text domain, are insufficient for exploiting the complex scenarios of the visual modality. Most importantly, the spatial distribution of visual objects is largely neglected. To address this issue, we propose to construct a spatial relation graph based on the given visual scenario. Further, we design two pre-training tasks, object position regression (OPR) and spatial relation classification (SRC), which learn to reconstruct the spatial relation graph. Quantitative analysis suggests that the proposed method guides the representations to retain more spatial context and focuses attention on the visual regions essential for reasoning. We achieve state-of-the-art results on VCR and on two other vision-and-language reasoning tasks, VQA and NLVR.
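The abstract describes constructing a spatial relation graph over the objects in a visual scene, which the SRC task then learns to reconstruct. A minimal sketch of how such a graph could be derived from object bounding boxes is shown below; the relation taxonomy (left/right/above/below/overlap) and the IoU threshold are illustrative assumptions, not the paper's exact definitions.

```python
# Hedged sketch: building a spatial relation graph from bounding boxes.
# The relation labels and thresholds here are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Box:
    x1: float
    y1: float
    x2: float
    y2: float

    def center(self):
        return ((self.x1 + self.x2) / 2, (self.y1 + self.y2) / 2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes."""
    ix1, iy1 = max(a.x1, b.x1), max(a.y1, b.y1)
    ix2, iy2 = min(a.x2, b.x2), min(a.y2, b.y2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a.x2 - a.x1) * (a.y2 - a.y1)
             + (b.x2 - b.x1) * (b.y2 - b.y1) - inter)
    return inter / union if union > 0 else 0.0

def spatial_relation(a: Box, b: Box, overlap_thresh: float = 0.5) -> str:
    """Relation of box b with respect to box a (one hypothetical taxonomy)."""
    if iou(a, b) >= overlap_thresh:
        return "overlap"
    (ax, ay), (bx, by) = a.center(), b.center()
    dx, dy = bx - ax, by - ay
    if abs(dx) >= abs(dy):
        return "right-of" if dx > 0 else "left-of"
    return "below" if dy > 0 else "above"  # image y-axis grows downward

def build_relation_graph(boxes: list[Box]) -> dict[tuple[int, int], str]:
    # Directed edge (i, j) labeled with the relation of box j relative to box i;
    # SRC would train the model to predict these labels from its representations.
    return {(i, j): spatial_relation(boxes[i], boxes[j])
            for i in range(len(boxes))
            for j in range(len(boxes)) if i != j}
```

For example, two side-by-side boxes `Box(0, 0, 10, 10)` and `Box(20, 0, 30, 10)` yield the edge labels `right-of` and `left-of` in the two directions. The OPR task would analogously regress the box coordinates themselves rather than classify edge labels.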

Authors (8)
  1. Cheng Yang (168 papers)
  2. Rui Xu (198 papers)
  3. Ye Guo (50 papers)
  4. Peixiang Huang (11 papers)
  5. Yiru Chen (10 papers)
  6. Wenkui Ding (13 papers)
  7. Zhongyuan Wang (105 papers)
  8. Hong Zhou (61 papers)
Citations (3)