More Than Just Attention: Improving Cross-Modal Attentions with Contrastive Constraints for Image-Text Matching (2105.09597v3)

Published 20 May 2021 in cs.CV

Abstract: Cross-modal attention mechanisms have been widely applied to the image-text matching task and have achieved remarkable improvements thanks to their capability of learning fine-grained relevance across different modalities. However, the cross-modal attention models of existing methods can be sub-optimal and inaccurate because no direct supervision is provided during the training process. In this work, we propose two novel training strategies, namely Contrastive Content Re-sourcing (CCR) and Contrastive Content Swapping (CCS) constraints, to address this limitation. These constraints supervise the training of cross-modal attention models in a contrastive learning manner without requiring explicit attention annotations. They are plug-in training strategies and can be easily integrated into existing cross-modal attention models. Additionally, we introduce three metrics, Attention Precision, Recall, and F1-Score, to quantitatively measure the quality of learned attention. We evaluate the proposed constraints by incorporating them into four state-of-the-art cross-modal attention-based image-text matching models. Experimental results on both the Flickr30k and MS-COCO datasets demonstrate that integrating these constraints improves model performance in terms of both retrieval accuracy and the attention metrics.
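The abstract names Attention Precision, Recall, and F1-Score as metrics for attention quality but does not spell out how they are computed. A minimal sketch of one plausible formulation, assuming attended regions are selected by thresholding the normalized attention weights against binary region-relevance labels (the function name, thresholding rule, and normalization are illustrative assumptions, not the paper's exact protocol):

```python
import numpy as np

def attention_prf(attn_weights, relevant_mask, threshold=0.5):
    """Hypothetical sketch: score an attention map against ground-truth
    region relevance. attn_weights is a 1-D array of attention weights
    over image regions; relevant_mask is a binary array where 1 marks a
    region that genuinely matches the text query."""
    # Normalize so the strongest region has weight 1, then treat regions
    # above the threshold as "attended" (an assumed selection rule).
    norm = attn_weights / attn_weights.max()
    attended = norm >= threshold

    # Standard precision/recall/F1 over the attended-vs-relevant sets.
    tp = np.sum(attended & (relevant_mask == 1))
    precision = tp / max(attended.sum(), 1)
    recall = tp / max(relevant_mask.sum(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-8)
    return precision, recall, f1
```

Under this reading, precision penalizes attention spent on irrelevant regions, recall penalizes relevant regions the model ignores, and F1 balances the two, which matches the abstract's goal of quantifying whether learned attention lands on the right content.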

Authors (7)
  1. Yuxiao Chen (66 papers)
  2. Jianbo Yuan (33 papers)
  3. Long Zhao (64 papers)
  4. Tianlang Chen (24 papers)
  5. Rui Luo (88 papers)
  6. Larry Davis (41 papers)
  7. Dimitris N. Metaxas (84 papers)
Citations (6)
