
Learning Visual Relation Priors for Image-Text Matching and Image Captioning with Neural Scene Graph Generators (1909.09953v1)

Published 22 Sep 2019 in cs.CV and cs.AI

Abstract: Grounding language to visual relations is critical to various language-and-vision applications. In this work, we tackle two fundamental language-and-vision tasks: image-text matching and image captioning, and demonstrate that neural scene graph generators can learn effective visual relation features that facilitate grounding language to visual relations and subsequently improve both end applications. By combining relation features with state-of-the-art models, our experiments show significant improvements on the standard Flickr30K and MSCOCO benchmarks. Our experimental results and analysis show that relation features improve downstream models' capability of capturing visual relations in end vision-and-language applications. We also demonstrate that training scene graph generators on visually relevant relations is important to the effectiveness of the relation features.
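The abstract describes combining relation features from a scene graph generator with an existing matching or captioning model. The paper does not specify the fusion mechanism here, so the sketch below is a minimal, hypothetical illustration: per-image relation features are mean-pooled and concatenated with per-region visual features before projection into a joint embedding space. The class name, feature dimensions, and concatenation-based fusion are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class RelationAugmentedEncoder(nn.Module):
    """Hypothetical fusion of region features with scene-graph relation
    features, as one plausible way to feed relation priors into a
    downstream image-text matching or captioning model."""

    def __init__(self, region_dim=2048, relation_dim=512, joint_dim=1024):
        super().__init__()
        # Project concatenated region + relation features into the
        # joint embedding space used by the downstream model.
        self.fuse = nn.Linear(region_dim + relation_dim, joint_dim)

    def forward(self, region_feats, relation_feats):
        # region_feats:   (batch, n_regions, region_dim), e.g. detector boxes
        # relation_feats: (batch, n_relations, relation_dim), one vector per
        #                 relation predicted by the scene graph generator
        # Pool relations into a single image-level relation summary,
        # then attach that summary to every region feature.
        rel_summary = relation_feats.mean(dim=1, keepdim=True)
        rel_summary = rel_summary.expand(-1, region_feats.size(1), -1)
        fused = torch.cat([region_feats, rel_summary], dim=-1)
        return self.fuse(fused)  # (batch, n_regions, joint_dim)

# Usage with random stand-in features:
enc = RelationAugmentedEncoder()
regions = torch.randn(2, 36, 2048)   # 36 detected regions per image
relations = torch.randn(2, 20, 512)  # 20 detected relations per image
joint = enc(regions, relations)
print(joint.shape)  # torch.Size([2, 36, 1024])
```

Mean-pooling is used here only to keep the sketch short; attention over individual relation vectors would preserve more structure and may be closer to what a full system would use.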

Authors (5)
  1. Kuang-Huei Lee (23 papers)
  2. Hamid Palangi (52 papers)
  3. Xi Chen (1036 papers)
  4. Houdong Hu (14 papers)
  5. Jianfeng Gao (344 papers)
Citations (36)
