Reinforcing an Image Caption Generator Using Off-Line Human Feedback (1911.09753v1)

Published 21 Nov 2019 in cs.CV and cs.CL

Abstract: Human ratings are currently the most accurate way to assess the quality of an image captioning model, yet most often the only outcome used from an expensive human rating evaluation is a few overall statistics over the evaluation dataset. In this paper, we show that the signal from instance-level human caption ratings can be leveraged to improve captioning models, even when the amount of caption ratings is several orders of magnitude smaller than the caption training data. We employ a policy gradient method to maximize the human ratings as rewards in an off-policy reinforcement learning setting, where policy gradients are estimated from samples drawn from a distribution that focuses on the captions in a caption ratings dataset. Our empirical evidence indicates that the proposed method learns to generalize the human raters' judgments to a previously unseen set of images, as judged by a different set of human judges, and additionally under a different, multi-dimensional side-by-side human evaluation procedure.
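
The setup the abstract describes is, at its core, importance-weighted REINFORCE: captions are sampled from a behavior distribution concentrated on the rated captions, their human ratings serve as rewards, and an importance weight corrects for the mismatch with the current policy. The sketch below illustrates that idea in PyTorch; it is a minimal illustration under those assumptions, and the function name `off_policy_pg_loss` and all variable names are hypothetical, not taken from the authors' code.

```python
import torch

def off_policy_pg_loss(policy_log_probs: torch.Tensor,
                       behavior_log_probs: torch.Tensor,
                       ratings: torch.Tensor) -> torch.Tensor:
    """Importance-weighted REINFORCE loss (illustrative sketch only).

    policy_log_probs   -- log pi_theta(caption | image) under the current model
    behavior_log_probs -- log q(caption | image) under the sampling distribution
                          that focuses on the rated captions
    ratings            -- per-caption human ratings used as rewards
    """
    # Importance weight pi_theta / q, detached so the gradient flows only
    # through log pi_theta, as in standard off-policy policy gradients.
    weights = torch.exp(policy_log_probs.detach() - behavior_log_probs)
    # Negate because optimizers minimize; the goal is to maximize the
    # expected human rating.
    return -(weights * ratings * policy_log_probs).mean()

# Toy usage with three hypothetical rated captions.
policy_log_probs = torch.tensor([-12.3, -9.8, -15.1], requires_grad=True)
behavior_log_probs = torch.tensor([-11.0, -10.2, -14.7])
ratings = torch.tensor([0.9, 0.4, 0.1])
loss = off_policy_pg_loss(policy_log_probs, behavior_log_probs, ratings)
loss.backward()  # pushes probability mass toward highly rated captions
```

In practice a baseline or reward normalization would typically be subtracted from the ratings to reduce gradient variance, but the essential off-policy correction is the `weights` term.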

Authors (5)
  1. Paul Hongsuck Seo (29 papers)
  2. Piyush Sharma (17 papers)
  3. Tomer Levinboim (8 papers)
  4. Bohyung Han (86 papers)
  5. Radu Soricut (54 papers)
Citations (20)
