Injecting Semantic Concepts into End-to-End Image Captioning (2112.05230v2)

Published 9 Dec 2021 in cs.CV and cs.CL

Abstract: Tremendous progress has been made in recent years in developing better image captioning models, yet most of them rely on a separate object detector to extract regional features. Recent vision-language studies are shifting towards the detector-free trend by leveraging grid representations for more flexible model training and faster inference speed. However, such development is primarily focused on image understanding tasks, and remains less investigated for the caption generation task. In this paper, we are concerned with a better-performing detector-free image captioning model, and propose a pure vision transformer-based image captioning model, dubbed ViTCAP, in which grid representations are used without extracting regional features. For improved performance, we introduce a novel Concept Token Network (CTN) to predict semantic concepts and then incorporate them into the end-to-end captioning. In particular, the CTN is built on a vision transformer and is designed to predict concept tokens through a classification task, and the rich semantic information they carry greatly benefits the captioning task. Compared with previous detector-based models, ViTCAP drastically simplifies the architecture while achieving competitive performance on various challenging image captioning datasets. In particular, ViTCAP reaches a 138.1 CIDEr score on the COCO-caption Karpathy split, and 93.8 and 108.6 CIDEr scores on the nocaps and Google-CC captioning datasets, respectively.
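
To make the CTN idea concrete, below is a minimal, hypothetical PyTorch sketch of the mechanism the abstract describes: a multi-label concept classifier over ViT grid features whose top-k predicted concepts are embedded and appended to the visual tokens before decoding. All names, dimensions, and the pooling/top-k choices are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ConceptTokenNetwork(nn.Module):
    """Hypothetical sketch of a Concept Token Network (CTN):
    predicts semantic concepts from ViT grid features via multi-label
    classification, then returns embeddings of the top-k predicted
    concepts to be consumed by the caption decoder."""

    def __init__(self, hidden_dim=768, concept_vocab=1000, top_k=20):
        super().__init__()
        self.classifier = nn.Linear(hidden_dim, concept_vocab)    # concept scores
        self.concept_embed = nn.Embedding(concept_vocab, hidden_dim)
        self.top_k = top_k

    def forward(self, grid_feats):
        # grid_feats: (B, N, D) patch/grid tokens from a vision transformer
        pooled = grid_feats.mean(dim=1)                  # simple mean pooling (assumption)
        logits = self.classifier(pooled)                 # (B, concept_vocab)
        top_idx = logits.topk(self.top_k, dim=-1).indices
        concept_tokens = self.concept_embed(top_idx)     # (B, top_k, D)
        return logits, concept_tokens

# Toy usage: visual grid tokens -> concept tokens -> decoder input
vit_grid = torch.randn(2, 196, 768)              # e.g. 14x14 grid from a ViT
ctn = ConceptTokenNetwork()
logits, concepts = ctn(vit_grid)
decoder_input = torch.cat([vit_grid, concepts], dim=1)   # (2, 196 + 20, 768)
```

In the paper's framing, the classification logits would be supervised against image-level concept labels, while the concept embeddings join the grid tokens as input to the captioning decoder; the sketch above only illustrates that data flow.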

Authors (8)
  1. Zhiyuan Fang (19 papers)
  2. Jianfeng Wang (149 papers)
  3. Xiaowei Hu (54 papers)
  4. Lin Liang (11 papers)
  5. Zhe Gan (135 papers)
  6. Lijuan Wang (133 papers)
  7. Yezhou Yang (119 papers)
  8. Zicheng Liu (153 papers)
Citations (73)
