
Actor-Critic Sequence Training for Image Captioning (1706.09601v2)

Published 29 Jun 2017 in cs.CV

Abstract: Generating natural language descriptions of images is an important capability for a robot or other visual-intelligence-driven AI agent that may need to communicate with human users about what it is seeing. Such image captioning methods are typically trained by maximising the likelihood of the ground-truth annotated caption given the image. While simple and easy to implement, this approach does not directly maximise the language quality metrics we care about, such as CIDEr. In this paper we investigate training image captioning methods based on actor-critic reinforcement learning in order to directly optimise non-differentiable quality metrics of interest. By formulating a per-token advantage and value computation strategy in this novel reinforcement-learning-based captioning model, we show that it is possible to achieve state-of-the-art performance on the widely used MSCOCO benchmark.
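The abstract's "per-token advantage and value computation" can be illustrated with a minimal sketch. The paper's exact formulation is not reproduced here; the snippet below assumes a standard temporal-difference advantage where the critic estimates a value for each generated token and the sequence-level reward (e.g. CIDEr) arrives only once the full caption is complete. The function name and signature are hypothetical, for illustration only.

```python
def per_token_advantages(values, final_reward, gamma=1.0):
    """Compute a TD-style advantage for each token in a generated caption.

    values: critic estimates V(s_t), one per generated token.
    final_reward: sequence-level score (e.g. CIDEr), received only
        after the last token -- intermediate rewards are zero.
    gamma: discount factor (often 1.0 for short captions).
    """
    T = len(values)
    advantages = []
    for t in range(T):
        # Bootstrap from the critic's estimate of the next state;
        # after the final token there is no future value.
        v_next = values[t + 1] if t + 1 < T else 0.0
        # The metric reward is only observed at the end of the caption.
        r_t = final_reward if t == T - 1 else 0.0
        advantages.append(r_t + gamma * v_next - values[t])
    return advantages
```

In an actor-critic training loop, each advantage would weight the log-probability gradient of the corresponding token, giving per-token credit assignment rather than a single sequence-level signal.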

Authors (7)
  1. Li Zhang (693 papers)
  2. Flood Sung (13 papers)
  3. Feng Liu (1213 papers)
  4. Tao Xiang (324 papers)
  5. Shaogang Gong (94 papers)
  6. Yongxin Yang (73 papers)
  7. Timothy M. Hospedales (69 papers)
Citations (109)
