Show, Tell and Discriminate: Image Captioning by Self-retrieval with Partially Labeled Data (1803.08314v3)

Published 22 Mar 2018 in cs.CV

Abstract: The aim of image captioning is to generate captions by machine to describe image contents. Despite many efforts, generating discriminative captions for images remains non-trivial. Most traditional approaches imitate language structure patterns and thus tend to fall into a stereotype of replicating frequent phrases or sentences, neglecting the unique aspects of each image. In this work, we propose an image captioning framework with a self-retrieval module as training guidance, which encourages generating discriminative captions. It brings unique advantages: (1) the self-retrieval guidance can act as a metric and an evaluator of caption discriminativeness to assure the quality of generated captions. (2) The correspondence between generated captions and images is naturally incorporated in the generation process without human annotations, and hence our approach can utilize a large amount of unlabeled images to boost captioning performance with no additional laborious annotations. We demonstrate the effectiveness of the proposed retrieval-guided method on the COCO and Flickr30k captioning datasets, and show its superior captioning performance with more discriminative captions.
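
The abstract does not spell out how the self-retrieval guidance is computed; one plausible reading is a batch-wise retrieval reward, where each generated caption is rewarded for retrieving its own image among the other images in the batch. The sketch below assumes cosine similarity between hypothetical encoder outputs (`caption_emb`, `image_emb`), a softmax retrieval probability as the reward, and a hypothetical mixing weight `alpha` for combining it with a standard captioning reward in a REINFORCE-style update; none of these specifics are taken from the paper text.

```python
# Minimal sketch of a self-retrieval reward as training guidance.
# Encoders, the mixing weight, and the exact reward form are assumptions
# for illustration, not the paper's stated implementation.
import torch
import torch.nn.functional as F

def self_retrieval_reward(caption_emb: torch.Tensor, image_emb: torch.Tensor) -> torch.Tensor:
    """Reward each generated caption for retrieving its own image
    against the other images in the batch (higher = more discriminative)."""
    caption_emb = F.normalize(caption_emb, dim=-1)   # (B, D)
    image_emb = F.normalize(image_emb, dim=-1)       # (B, D)
    sim = caption_emb @ image_emb.t()                # (B, B) cosine similarities
    # Probability that caption i retrieves image i among all batch images.
    return F.softmax(sim, dim=-1).diagonal()         # (B,)

def mixed_reward(cider_reward: torch.Tensor,
                 caption_emb: torch.Tensor,
                 image_emb: torch.Tensor,
                 alpha: float = 0.5) -> torch.Tensor:
    """Combine a standard captioning reward (e.g. CIDEr) with the
    self-retrieval reward; `alpha` is a hypothetical weighting."""
    return cider_reward + alpha * self_retrieval_reward(caption_emb, image_emb)
```

Because the retrieval signal only requires matching a generated caption back to its source image, it needs no ground-truth captions, which is consistent with the abstract's claim that unlabeled images can be used to boost performance.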

Authors (5)
  1. Xihui Liu (92 papers)
  2. Hongsheng Li (340 papers)
  3. Jing Shao (109 papers)
  4. Dapeng Chen (33 papers)
  5. Xiaogang Wang (230 papers)
Citations (130)