
Automatic Evaluation of Attribution by Large Language Models (2305.06311v2)

Published 10 May 2023 in cs.CL

Abstract: A recent focus of LLM development, as exemplified by generative search engines, is to incorporate external references to generate and support its claims. However, evaluating the attribution, i.e., verifying whether the generated statement is fully supported by the cited reference, remains an open problem. Although human evaluation is common practice, it is costly and time-consuming. In this paper, we investigate the automatic evaluation of attribution given by LLMs. We begin by defining different types of attribution errors, and then explore two approaches for automatic evaluation: prompting LLMs and fine-tuning smaller LMs. The fine-tuning data is repurposed from related tasks such as question answering, fact-checking, natural language inference, and summarization. We manually curate a set of test examples covering 12 domains from a generative search engine, New Bing. Our results on this curated test set and simulated examples from existing benchmarks highlight both promising signals and challenges. We hope our problem formulation, testbeds, and findings will help lay the foundation for future studies on this important problem.
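The abstract's first approach, prompting LLMs, can be framed as a three-way classification of a (claim, reference) pair. A minimal sketch of that framing is below; the prompt wording and the default-on-ambiguity behavior are assumptions for illustration, while the three labels (attributable, extrapolatory, contradictory) follow the paper's error taxonomy.

```python
# Sketch of attribution evaluation via zero-shot prompting.
# Prompt text is an illustrative assumption, not the authors' exact prompt.

def build_attribution_prompt(claim: str, reference: str) -> str:
    """Construct a zero-shot prompt asking an LLM whether the
    reference fully supports the claim."""
    return (
        "Decide whether the claim is fully supported by the reference.\n"
        "Answer with exactly one word: attributable, extrapolatory, "
        "or contradictory.\n\n"
        f"Reference: {reference}\n"
        f"Claim: {claim}\n"
        "Answer:"
    )

def parse_attribution_label(model_output: str) -> str:
    """Map a raw model completion onto one of the three labels.

    Falls back to 'extrapolatory' (insufficient support) when the
    output matches no label -- a conservative default, assumed here.
    """
    text = model_output.strip().lower()
    for label in ("attributable", "contradictory", "extrapolatory"):
        if label in text:
            return label
    return "extrapolatory"
```

In practice the prompt string would be sent to an LLM API and the completion passed to `parse_attribution_label`; the same (claim, reference) format also fits the paper's second approach of fine-tuning smaller LMs on repurposed QA, fact-checking, NLI, and summarization data.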

Authors (6)
  1. Xiang Yue (72 papers)
  2. Boshi Wang (16 papers)
  3. Ziru Chen (20 papers)
  4. Kai Zhang (542 papers)
  5. Yu Su (138 papers)
  6. Huan Sun (88 papers)
Citations (47)
