Contrastive Learning for Weakly Supervised Phrase Grounding (2006.09920v3)

Published 17 Jun 2020 in cs.CV, cs.CL, cs.LG, and stat.ML

Abstract: Phrase grounding, the problem of associating image regions to caption words, is a crucial component of vision-language tasks. We show that phrase grounding can be learned by optimizing word-region attention to maximize a lower bound on mutual information between images and caption words. Given pairs of images and captions, we maximize compatibility of the attention-weighted regions and the words in the corresponding caption, compared to non-corresponding pairs of images and captions. A key idea is to construct effective negative captions for learning through LLM guided word substitutions. Training with our negatives yields a $\sim10\%$ absolute gain in accuracy over randomly-sampled negatives from the training data. Our weakly supervised phrase grounding model trained on COCO-Captions shows a healthy gain of $5.7\%$ to achieve $76.7\%$ accuracy on Flickr30K Entities benchmark.
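The objective sketched in the abstract can be illustrated in code: word-region attention pools image region features for each caption word, a compatibility score is computed for the (image, caption) pair, and an InfoNCE-style contrastive loss scores the true caption against negative captions. This is a minimal sketch, not the authors' implementation; the tensor shapes, the dot-product attention, and the cosine-similarity scoring function are illustrative assumptions.

```python
# Minimal sketch of a word-region attention contrastive objective.
# Shapes and scoring functions are assumptions, not the paper's exact model.
import torch
import torch.nn.functional as F

def attend_regions(word_feats, region_feats):
    """Attention-weighted region feature for each caption word.

    word_feats:   (num_words, d)   caption word embeddings
    region_feats: (num_regions, d) image region features
    """
    attn = F.softmax(word_feats @ region_feats.T, dim=-1)  # (W, R) attention
    return attn @ region_feats                             # (W, d) pooled regions

def compatibility(word_feats, region_feats):
    """Mean word-to-attended-region similarity for one (image, caption) pair."""
    attended = attend_regions(word_feats, region_feats)
    return F.cosine_similarity(word_feats, attended, dim=-1).mean()

def contrastive_loss(word_feats, neg_word_feats_list, region_feats, tau=0.07):
    """InfoNCE-style lower bound: true caption (index 0) vs. negative captions."""
    scores = [compatibility(word_feats, region_feats) / tau]
    for neg in neg_word_feats_list:
        scores.append(compatibility(neg, region_feats) / tau)
    scores = torch.stack(scores)
    return -F.log_softmax(scores, dim=0)[0]  # negative log-prob of true pair
```

In this framing, the paper's key contribution is how `neg_word_feats_list` is built: instead of randomly sampled captions, negatives are constructed by substituting context-appropriate words via a language model, which the abstract reports is worth roughly 10% absolute accuracy.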

Authors (6)
  1. Tanmay Gupta (23 papers)
  2. Arash Vahdat (69 papers)
  3. Gal Chechik (110 papers)
  4. Xiaodong Yang (101 papers)
  5. Jan Kautz (215 papers)
  6. Derek Hoiem (50 papers)
Citations (133)