
A Real-time Global Inference Network for One-stage Referring Expression Comprehension (1912.03478v1)

Published 7 Dec 2019 in cs.CV

Abstract: Referring Expression Comprehension (REC) is an emerging research topic in computer vision, which refers to detecting the target region in an image given a text description. Most existing REC methods follow a multi-stage pipeline, which is computationally expensive and greatly limits the application of REC. In this paper, we propose a one-stage model towards real-time REC, termed Real-time Global Inference Network (RealGIN). RealGIN addresses the diversity and complexity issues in REC with two innovative designs: the Adaptive Feature Selection (AFS) and the Global Attentive ReAsoNing unit (GARAN). AFS adaptively fuses features at different semantic levels to handle the varying content of expressions. GARAN uses the textual feature as a pivot to collect expression-related visual information from all regions, and then selectively diffuses such information back to all regions, which provides sufficient context for modeling the complex linguistic conditions in expressions. On five benchmark datasets, i.e., RefCOCO, RefCOCO+, RefCOCOg, ReferIt and Flickr30k, the proposed RealGIN outperforms most prior works and achieves very competitive performance against the most advanced method, i.e., MAttNet. Most importantly, under the same hardware, RealGIN can boost the processing speed by about 10 times over the existing methods.
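The GARAN unit described above follows a collect-then-diffuse attention pattern: the textual feature acts as a query to pool expression-related context from all visual regions, and that context is then selectively written back to each region. The following is a minimal NumPy sketch of this pattern under stated assumptions — the function name `garan_sketch`, the dot-product scoring, and the additive fusion are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def garan_sketch(visual, text):
    """Collect-and-diffuse attention sketch (illustrative, not the paper's exact math).

    visual: (N, d) array of region features
    text:   (d,)   expression (textual) feature used as the attention pivot
    returns (N, d) context-enhanced region features
    """
    # Collect: score each region against the textual pivot,
    # then pool an expression-related context vector.
    collect_w = softmax(visual @ text)              # (N,)
    context = collect_w @ visual                    # (d,)

    # Diffuse: decide how much context each region receives,
    # then add the gated context back to every region.
    diffuse_w = softmax(visual @ context)           # (N,)
    return visual + diffuse_w[:, None] * context    # (N, d)
```

In this sketch the same dot-product scoring is reused for both phases; in practice the collect and diffuse steps would typically use separate learned projections.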

Authors (8)
  1. Yiyi Zhou (38 papers)
  2. Rongrong Ji (315 papers)
  3. Gen Luo (32 papers)
  4. Xiaoshuai Sun (91 papers)
  5. Jinsong Su (96 papers)
  6. Xinghao Ding (66 papers)
  7. Qi Tian (314 papers)
  8. Chia-Wen Lin (79 papers)
Citations (54)
