Learning Relation Alignment for Calibrated Cross-modal Retrieval (2105.13868v2)

Published 28 May 2021 in cs.CL, cs.CV, and cs.IR

Abstract: Despite the achievements of large-scale multimodal pre-training approaches, cross-modal retrieval, e.g., image-text retrieval, remains a challenging task. To bridge the semantic gap between the two modalities, previous studies mainly focus on word-region alignment at the object level, lacking the matching between the linguistic relations among words and the visual relations among regions. The neglect of such relation consistency impairs the contextualized representation of image-text pairs and hinders both model performance and interpretability. In this paper, we first propose a novel metric, Intra-modal Self-attention Distance (ISD), to quantify relation consistency by measuring the semantic distance between linguistic and visual relations. In response, we present Inter-modal Alignment on Intra-modal Self-attentions (IAIS), a regularized training method that optimizes the ISD and calibrates the intra-modal self-attentions of the two modalities mutually via inter-modal alignment. The IAIS regularizer boosts the performance of prevailing models on the Flickr30k and MS COCO datasets by a considerable margin, which demonstrates the superiority of our approach.
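The abstract does not spell out how ISD is computed, so the snippet below is only a minimal sketch of the general idea: compare the textual self-attention map with the visual self-attention map after projecting the latter into the word index space through a text-to-image cross-attention matrix. The function name, the projection `S @ A_img @ S.T`, and the symmetrized-KL choice of distance are all assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable row-wise softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def intra_modal_self_attention_distance(A_text, A_img, S):
    """Hypothetical ISD-style score (sketch, not the paper's exact metric).

    A_text: (n_words, n_words) row-stochastic textual self-attention.
    A_img:  (n_regions, n_regions) row-stochastic visual self-attention.
    S:      (n_words, n_regions) row-stochastic text-to-image cross-attention,
            used here to map visual relations into the word index space.
    """
    # Project the visual self-attention into the textual index space and
    # renormalize each row so it is again a distribution over words.
    A_img_in_text = S @ A_img @ S.T
    A_img_in_text = A_img_in_text / A_img_in_text.sum(axis=-1, keepdims=True)

    # Symmetrized KL divergence between the two relation matrices, row-averaged.
    eps = 1e-8
    kl_fwd = (A_text * np.log((A_text + eps) / (A_img_in_text + eps))).sum(axis=-1)
    kl_bwd = (A_img_in_text * np.log((A_img_in_text + eps) / (A_text + eps))).sum(axis=-1)
    return 0.5 * (kl_fwd + kl_bwd).mean()

# Toy usage with random attention maps (5 words, 7 regions).
rng = np.random.default_rng(0)
A_text = softmax(rng.normal(size=(5, 5)))
A_img = softmax(rng.normal(size=(7, 7)))
S = softmax(rng.normal(size=(5, 7)))
print(intra_modal_self_attention_distance(A_text, A_img, S))
```

Used as a regularizer in the spirit of IAIS, a term like this would be added to the retrieval loss so that training pulls the two intra-modal relation structures toward each other.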

Authors (8)
  1. Shuhuai Ren (30 papers)
  2. Junyang Lin (99 papers)
  3. Guangxiang Zhao (17 papers)
  4. Rui Men (21 papers)
  5. An Yang (32 papers)
  6. Jingren Zhou (198 papers)
  7. Xu Sun (194 papers)
  8. Hongxia Yang (130 papers)
Citations (31)