
Understanding Retrieval Robustness for Retrieval-Augmented Image Captioning (2406.02265v3)

Published 4 Jun 2024 in cs.CV and cs.CL

Abstract: Recent advances in retrieval-augmented models for image captioning highlight the benefit of retrieving related captions for efficient, lightweight models with strong domain-transfer capabilities. While these models demonstrate the success of retrieval augmentation, retrieval models are still far from perfect in practice: the retrieved information can sometimes mislead the model, resulting in incorrect generation and worse performance. In this paper, we analyze the robustness of SmallCap, a retrieval-augmented captioning model. Our analysis shows that the model is sensitive to tokens that appear in the majority of the retrieved captions, and the input attribution shows that those tokens are likely copied into the generated output. Given these findings, we propose to train the model by sampling retrieved captions from more diverse sets. This decreases the chance that the model learns to copy majority tokens, and improves both in-domain and cross-domain performance.
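The proposed training change, drawing the retrieved captions used as context from a larger and more diverse candidate set rather than always using the top-ranked ones, can be sketched roughly as below. This is a minimal illustration under assumptions: the function names, pool size, number of context captions, and prompt format are placeholders, not the paper's exact implementation.

```python
import random

def sample_retrieved_captions(candidates, k=4, pool_size=12):
    """Sample k captions from a wider retrieved pool instead of taking the top-k.

    `candidates` is assumed to be a list of retrieved captions sorted by
    retrieval score. Sampling from a larger pool varies the retrieved context
    seen during training, which reduces the chance that the captioner learns
    to copy tokens that dominate the top-ranked captions.
    """
    pool = candidates[:pool_size]
    if len(pool) <= k:
        return pool
    return random.sample(pool, k)

# Hypothetical example: build the prompt context for one training image.
retrieved = [
    "a dog runs across a grassy field",
    "a brown dog playing in the park",
    "a dog chasing a ball on the grass",
    "a puppy running outdoors",
    "two dogs playing together in a yard",
]
context = sample_retrieved_captions(retrieved, k=3, pool_size=5)
prompt = "Similar images show: " + " ".join(context)
```

At inference time the model would still condition on the top-retrieved captions; only the training-time context is diversified in this sketch.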

Authors (5)
  1. Wenyan Li (8 papers)
  2. Jiaang Li (15 papers)
  3. Rita Ramos (5 papers)
  4. Raphael Tang (32 papers)
  5. Desmond Elliott (53 papers)
Citations (1)
