CK-Transformer: Commonsense Knowledge Enhanced Transformers for Referring Expression Comprehension (2302.09027v1)

Published 17 Feb 2023 in cs.CV, cs.AI, cs.CL, and cs.MM

Abstract: The task of multimodal referring expression comprehension (REC), aiming at localizing an image region described by a natural language expression, has recently received increasing attention within the research community. In this paper, we specifically focus on referring expression comprehension with commonsense knowledge (KB-Ref), a task which typically requires reasoning beyond spatial, visual or semantic information. We propose a novel framework for Commonsense Knowledge Enhanced Transformers (CK-Transformer) which effectively integrates commonsense knowledge into the representations of objects in an image, facilitating identification of the target objects referred to by the expressions. We conduct extensive experiments on several benchmarks for the task of KB-Ref. Our results show that the proposed CK-Transformer achieves a new state of the art, with an absolute improvement of 3.14% accuracy over the existing state of the art.
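The paper's actual architecture is not reproduced here, but the core idea the abstract describes — enriching each detected object's representation with retrieved commonsense facts before matching objects against the referring expression — can be sketched as follows. This is a minimal illustrative sketch: the function names, the attention-plus-residual fusion, and the dot-product scoring are all assumptions for exposition, not the CK-Transformer's actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_with_knowledge(obj_feats, kb_feats):
    """Enhance object features with retrieved commonsense facts.

    obj_feats: (n_obj, d)          -- one embedding per detected object
    kb_feats:  (n_obj, n_facts, d) -- embeddings of facts retrieved per object

    Each object attends over its own facts (scaled dot-product attention),
    and the attended fact summary is added back residually.
    """
    d = obj_feats.shape[-1]
    attn = softmax(np.einsum('od,ofd->of', obj_feats, kb_feats) / np.sqrt(d))
    kb_summary = np.einsum('of,ofd->od', attn, kb_feats)
    return obj_feats + kb_summary  # residual fusion

def rank_objects(fused_feats, expr_feat):
    """Score knowledge-enhanced objects against the expression embedding."""
    scores = fused_feats @ expr_feat
    return int(np.argmax(scores)), softmax(scores)
```

In this toy setup the target object is simply the one whose fused representation best matches the expression embedding; the real model instead processes expression tokens, objects, and facts jointly with transformer layers and is trained end-to-end on KB-Ref data.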

Authors (4)
  1. Zhi Zhang (113 papers)
  2. Helen Yannakoudakis (32 papers)
  3. Xiantong Zhen (56 papers)
  4. Ekaterina Shutova (52 papers)