GPT-4 as an Effective Zero-Shot Evaluator for Scientific Figure Captions (2310.15405v1)

Published 23 Oct 2023 in cs.CL

Abstract: There is growing interest in systems that generate captions for scientific figures. However, assessing these systems' output poses a significant challenge. Human evaluation requires academic expertise and is costly, while automatic evaluation depends on often low-quality author-written captions. This paper investigates using LLMs as a cost-effective, reference-free method for evaluating figure captions. We first constructed SCICAP-EVAL, a human evaluation dataset that contains human judgments for 3,600 scientific figure captions, both original and machine-made, for 600 arXiv figures. We then prompted LLMs like GPT-4 and GPT-3 to score (1-6) each caption based on its potential to aid reader understanding, given relevant context such as figure-mentioning paragraphs. Results show that GPT-4, used as a zero-shot evaluator, outperformed all other models and even surpassed assessments made by Computer Science and Informatics undergraduates, achieving a Kendall correlation score of 0.401 with Ph.D. students' rankings.
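The evaluation setup the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the prompt wording and the score-parsing logic are assumptions for demonstration purposes.

```python
import re

def build_prompt(caption: str, context: str) -> str:
    """Assemble a zero-shot evaluation prompt in the spirit of the paper:
    ask an LLM to score a figure caption 1-6 for how well it would help
    a reader understand the figure, given the figure-mentioning
    paragraphs as context. Exact wording is hypothetical."""
    return (
        "You are evaluating a scientific figure caption.\n\n"
        f"Context (paragraphs mentioning the figure):\n{context}\n\n"
        f"Caption:\n{caption}\n\n"
        "On a scale of 1 (worst) to 6 (best), how helpful is this caption "
        "for understanding the figure? Answer with a single integer."
    )

def parse_score(reply: str) -> int:
    """Extract the first digit 1-6 from the model's free-text reply."""
    match = re.search(r"[1-6]", reply)
    if match is None:
        raise ValueError(f"no score found in reply: {reply!r}")
    return int(match.group())
```

The prompt string would be sent to a model such as GPT-4 via its chat API, and `parse_score` applied to the reply; correlating the resulting scores with human rankings (e.g. Kendall's tau) is how the paper measures evaluator quality.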

Authors (6)
  1. Ting-Yao Hsu (11 papers)
  2. Chieh-Yang Huang (24 papers)
  3. Ryan Rossi (67 papers)
  4. Sungchul Kim (65 papers)
  5. C. Lee Giles (69 papers)
  6. Ting-Hao K. Huang (4 papers)
Citations (8)
