Joint Visual Grounding with Language Scene Graphs (1906.03561v2)

Published 9 Jun 2019 in cs.CV

Abstract: Visual grounding is the task of localizing referring expressions in images, e.g., "the white truck in front of the yellow one". To resolve this task fundamentally, a model should first identify the contextual objects (e.g., the "yellow" truck) and then exploit them to disambiguate the referent from other similar objects using the attributes and relationships (e.g., "white", "yellow", "in front of"). However, due to the lack of annotations on contextual objects and their relationships, existing methods degenerate this joint grounding process into a holistic association between the expression and regions, and thus suffer from unsatisfactory performance and limited interpretability. In this paper, we alleviate the missing-annotation problem and enable joint reasoning by leveraging the language scene graph, which covers both the labeled referent and the unlabeled contexts (other objects, attributes, and relationships). Specifically, the language scene graph is a graphical representation whose nodes are objects with attributes and whose edges are relationships. We construct a factor graph based on it and then perform marginalization over the graph, so that both the referent and the contexts can be grounded on their corresponding image regions to achieve joint visual grounding (JVG). Experimental results demonstrate that the proposed approach is effective and interpretable: on three benchmarks, it outperforms state-of-the-art methods while offering a complete grounding of all the objects mentioned in the referring expression.
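The marginalization step described in the abstract can be illustrated with a toy sketch. The example below is not the paper's implementation: the scores, region count, and the brute-force enumeration are all illustrative assumptions (the paper performs marginalization via the factor graph built from the language scene graph). Here, two scene-graph nodes (the referent "white truck" and its context "yellow truck") each carry hypothetical unary region scores, the "in front of" edge carries a hypothetical pairwise score, and summing over the context node yields a marginal belief for the referent.

```python
import itertools
import numpy as np

# Hypothetical unary scores: compatibility of each scene-graph node with
# each of three candidate image regions r0..r2 (illustrative numbers).
# Row 0: referent node "white truck"; row 1: context node "yellow truck".
unary = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.1, 0.8],
])

# Hypothetical pairwise factor for the edge "in front of":
# pairwise[i, j] scores assigning the referent to region i and the
# context to region j (illustrative numbers).
pairwise = np.array([
    [0.0, 0.3, 0.9],
    [0.2, 0.0, 0.4],
    [0.1, 0.1, 0.0],
])

num_regions = unary.shape[1]
marginal = np.zeros(num_regions)

# Brute-force marginalization: sum the joint score over all assignments
# of the context node, for each possible referent region. Tractable here;
# a real factor graph would use message passing instead.
for r_ref, r_ctx in itertools.product(range(num_regions), repeat=2):
    if r_ref == r_ctx:
        continue  # two distinct objects should not share one region
    marginal[r_ref] += unary[0, r_ref] * unary[1, r_ctx] * pairwise[r_ref, r_ctx]

marginal /= marginal.sum()  # normalize to a belief over regions
print("referent grounded to region", int(marginal.argmax()))
```

Because both the referent and the context receive region assignments in the joint score, the same marginals can be read off for the context node as well, which is what gives the method its "complete grounding" interpretability.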

Authors (5)
  1. Daqing Liu (27 papers)
  2. Hanwang Zhang (161 papers)
  3. Zheng-Jun Zha (144 papers)
  4. Meng Wang (1063 papers)
  5. Qianru Sun (65 papers)
Citations (5)