On the Blind Spots of Model-Based Evaluation Metrics for Text Generation (2212.10020v3)

Published 20 Dec 2022 in cs.CL

Abstract: In this work, we explore a useful but often neglected methodology for robustness analysis of text generation evaluation metrics: stress tests with synthetic data. Basically, we design and synthesize a wide range of potential errors and check whether they result in a commensurate drop in the metric scores. We examine a range of recently proposed evaluation metrics based on pretrained LLMs, for the tasks of open-ended generation, translation, and summarization. Our experiments reveal interesting insensitivities, biases, or even loopholes in existing metrics. For example, we find that BERTScore is confused by truncation errors in summarization, and MAUVE (built on top of GPT-2) is insensitive to errors at the beginning or middle of generations. Further, we investigate the reasons behind these blind spots and suggest practical workarounds for a more reliable evaluation of text generation. We have released our code and data at https://github.com/cloudygoose/blindspot_nlg.
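
The stress-test recipe described in the abstract (corrupt otherwise good outputs with a synthetic error, then check whether the metric score drops commensurately) can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' released code, of a truncation stress test for BERTScore; it assumes the `bert-score` package and its `score()` function, and the example sentences are invented for demonstration.

```python
# Minimal sketch of the paper's stress-test methodology (not the authors' code):
# inject a synthetic error (here, truncation) into otherwise good outputs and
# check whether the metric's score drops accordingly.
# Assumes the `bert-score` package (pip install bert-score).
from bert_score import score

references = [
    "The committee approved the new budget after a lengthy debate on spending priorities.",
    "Heavy rain caused flooding in several low-lying neighborhoods over the weekend.",
]
good_outputs = [
    "After a long debate over spending priorities, the committee approved the new budget.",
    "Several low-lying neighborhoods flooded over the weekend because of heavy rain.",
]

# Synthetic truncation error: keep only the first half of each candidate.
truncated_outputs = [c[: len(c) // 2] for c in good_outputs]

# Score the clean and the corrupted candidates against the same references.
_, _, f1_clean = score(good_outputs, references, lang="en", verbose=False)
_, _, f1_trunc = score(truncated_outputs, references, lang="en", verbose=False)

# A robust metric should score the truncated outputs clearly lower; the paper
# reports that BERTScore's drop under truncation is smaller than expected.
print(f"clean F1:     {f1_clean.mean().item():.4f}")
print(f"truncated F1: {f1_trunc.mean().item():.4f}")
```

The same pattern extends to the other error types the paper studies (for example, shuffling or noising spans at different positions in the output) by swapping out the corruption function while keeping the scoring step fixed.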

Authors (7)
  1. Tianxing He (36 papers)
  2. Jingyu Zhang (40 papers)
  3. Tianle Wang (30 papers)
  4. Sachin Kumar (68 papers)
  5. Kyunghyun Cho (292 papers)
  6. James Glass (173 papers)
  7. Yulia Tsvetkov (142 papers)
Citations (39)
