Variational Context: Exploiting Visual and Textual Context for Grounding Referring Expressions (1907.03609v1)

Published 8 Jul 2019 in cs.CV

Abstract: We focus on grounding (i.e., localizing or linking) referring expressions in images, e.g., "largest elephant standing behind baby elephant". This is a general yet challenging vision-language task since it requires not only the localization of objects, but also the multimodal comprehension of context -- visual attributes (e.g., "largest", "baby") and relationships (e.g., "behind") that help to distinguish the referent from other objects, especially those of the same category. Due to the exponential complexity involved in modeling the context associated with multiple image regions, existing work oversimplifies this task to pairwise region modeling by multiple instance learning. In this paper, we propose a variational Bayesian method, called Variational Context, to solve the problem of complex context modeling in referring expression grounding. Specifically, our framework exploits the reciprocal relation between the referent and context, i.e., each influences the estimation of the posterior distribution of the other, and thereby the search space of context can be greatly reduced. In addition to reciprocity, our framework considers the semantic information of context, i.e., the referring expression can be reproduced based on the estimated context. We also extend the model to the unsupervised setting, where no annotation for the referent is available. Extensive experiments on various benchmarks show consistent improvement over state-of-the-art methods in both supervised and unsupervised settings.
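The referent-context reciprocity described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): a soft attention over candidate regions stands in for the context posterior given the expression, and the aggregated context vector then re-scores every region as the referent. The bilinear scorers, feature dimensions, and the class name `VariationalContextSketch` below are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VariationalContextSketch(nn.Module):
    """Toy sketch of referent-context reciprocity: the expression first
    induces a soft distribution over candidate regions (the "context"),
    and the pooled context then re-scores each region as the referent."""

    def __init__(self, region_dim=2048, text_dim=300):
        super().__init__()
        # q(context | expression, regions): bilinear score per region
        self.ctx_score = nn.Bilinear(region_dim, text_dim, 1)
        # p(referent | context, expression): scores a region fused with context
        self.ref_score = nn.Bilinear(region_dim * 2, text_dim, 1)

    def forward(self, regions, expr):
        # regions: (N, region_dim) features of N candidate boxes
        # expr:    (text_dim,) pooled referring-expression embedding
        expr_b = expr.unsqueeze(0).expand(regions.size(0), -1)

        # Step 1: soft posterior over context regions given the expression.
        ctx_logits = self.ctx_score(regions, expr_b).squeeze(-1)   # (N,)
        ctx_attn = F.softmax(ctx_logits, dim=0)

        # Step 2: aggregate a context vector and re-score every region as the
        # referent conditioned on that context (the reciprocity).
        ctx_vec = (ctx_attn.unsqueeze(-1) * regions).sum(dim=0)    # (region_dim,)
        fused = torch.cat(
            [regions, ctx_vec.unsqueeze(0).expand_as(regions)], dim=-1
        )
        ref_logits = self.ref_score(fused, expr_b).squeeze(-1)     # (N,)
        return F.softmax(ref_logits, dim=0), ctx_attn


# Usage with random features (region/expression encoders are out of scope here).
model = VariationalContextSketch()
regions = torch.randn(6, 2048)    # 6 candidate boxes
expr = torch.randn(300)           # pooled expression embedding
ref_probs, ctx_attn = model(regions, expr)
print(ref_probs.argmax().item())  # index of the predicted referent box
```

In the paper this estimation is variational and the context must also be able to reproduce the expression; the single forward pass above only conveys how conditioning on an estimated context shrinks the search space relative to scoring all region pairs.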

Authors (4)
  1. Yulei Niu (32 papers)
  2. Hanwang Zhang (161 papers)
  3. Zhiwu Lu (51 papers)
  4. Shih-Fu Chang (131 papers)
Citations (23)
