TIGEr: Text-to-Image Grounding for Image Caption Evaluation (1909.02050v1)
Abstract: This paper presents a new metric called TIGEr for the automatic evaluation of image captioning systems. Popular metrics, such as BLEU and CIDEr, are based solely on text matching between reference captions and machine-generated captions, potentially leading to biased evaluations because references may not fully cover the image content and natural language is inherently ambiguous. Building upon a machine-learned text-image grounding model, TIGEr evaluates caption quality not only by how well a caption represents image content, but also by how well machine-generated captions match human-generated captions. Our empirical tests show that TIGEr is more consistent with human judgments than existing alternative metrics. We also comprehensively assess the metric's effectiveness in caption evaluation by measuring the correlation between human judgments and metric scores.
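To make the idea concrete, below is a minimal, illustrative sketch of a grounding-based caption score in the spirit the abstract describes: ground both the candidate and the reference captions in the image, then compare how similarly they attend to image regions. This is not the authors' implementation; the per-region grounding scores are assumed to come precomputed from a pretrained text-image grounding model, and the two similarity measures (rank agreement and an earth mover's distance over normalized grounding weights) are hypothetical choices for illustration.

```python
import numpy as np
from scipy.stats import kendalltau, wasserstein_distance

def grounding_based_score(cand_grounding, ref_groundings):
    """Illustrative caption score from precomputed region-grounding vectors.

    cand_grounding: (R,) array of grounding scores between the candidate
        caption and each of R image regions (assumed to come from a
        pretrained text-image grounding model).
    ref_groundings: list of (R,) arrays, one per human reference caption.
    Returns a score in [0, 1]; higher means the candidate grounds the
    image more like the human references do.
    """
    scores = []
    for ref in ref_groundings:
        # Rank agreement: do the two captions rank image regions by
        # importance in a similar order?
        tau, _ = kendalltau(cand_grounding, ref)
        rank_sim = (tau + 1.0) / 2.0  # map [-1, 1] -> [0, 1]

        # Distribution agreement: compare normalized grounding weights
        # via an earth mover's distance over the region indices.
        p = cand_grounding / cand_grounding.sum()
        q = ref / ref.sum()
        regions = np.arange(len(p))
        emd = wasserstein_distance(regions, regions, p, q)
        dist_sim = 1.0 / (1.0 + emd)  # map distance to (0, 1]

        scores.append(0.5 * (rank_sim + dist_sim))
    # Average similarity to all references.
    return float(np.mean(scores))

# Toy usage: 5 image regions, one candidate, two references.
cand = np.array([0.9, 0.2, 0.1, 0.4, 0.3])
refs = [np.array([0.8, 0.1, 0.2, 0.5, 0.2]),
        np.array([0.7, 0.3, 0.1, 0.4, 0.4])]
print(grounding_based_score(cand, refs))
```

Because the score is computed in the grounded (image-region) space rather than by surface text matching, a candidate that paraphrases the references but attends to the same image content can still score well, which is the key contrast with BLEU- or CIDEr-style metrics drawn in the abstract.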
- Ming Jiang (59 papers)
- Qiuyuan Huang (23 papers)
- Lei Zhang (1689 papers)
- Xin Wang (1307 papers)
- Pengchuan Zhang (58 papers)
- Zhe Gan (135 papers)
- Jana Diesner (21 papers)
- Jianfeng Gao (344 papers)