Dialect-robust Evaluation of Generated Text (2211.00922v1)

Published 2 Nov 2022 in cs.CL

Abstract: Evaluation metrics that are not robust to dialect variation make it impossible to tell how well systems perform for many groups of users, and can even penalize systems for producing text in lower-resource dialects. However, there is currently no way to quantify how metrics respond to changes in the dialect of a generated utterance. We thus formalize dialect robustness and dialect awareness as goals for NLG evaluation metrics. We introduce a suite of methods and corresponding statistical tests one can use to assess metrics in light of the two goals. Applying the suite to current state-of-the-art metrics, we demonstrate that they are not dialect-robust and that semantic perturbations frequently lead to smaller decreases in a metric than the introduction of dialect features. As a first step toward overcoming this limitation, we propose a training schema, NANO, which introduces regional and language information into the pretraining process of a metric. We demonstrate that NANO provides a size-efficient way for models to improve dialect robustness while simultaneously improving their performance on the standard metric benchmark.
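
To make the robustness criterion concrete, here is a minimal sketch of the kind of comparison the abstract describes, not the paper's actual test suite: a dialect-robust metric should penalize a meaning-preserving dialect variant less than a genuine semantic perturbation. Sentence-level BLEU (via the sacrebleu library) stands in for the learned metrics the paper evaluates, and the example sentences are hypothetical illustrations.

```python
# Sketch: compare a metric's score drop under a dialect feature vs. a
# semantic perturbation. Sentence-level BLEU is a stand-in metric here;
# the paper studies learned metrics such as BLEURT. Example strings are
# hypothetical, not drawn from the paper's data.
import sacrebleu

reference = "She has been working on the project all week."
dialect_variant = "She been working on the project all week."        # meaning-preserving dialect feature
semantic_perturb = "She has been sleeping at the office all week."   # meaning-changing edit

ref_score = sacrebleu.sentence_bleu(reference, [reference]).score
dialect_score = sacrebleu.sentence_bleu(dialect_variant, [reference]).score
semantic_score = sacrebleu.sentence_bleu(semantic_perturb, [reference]).score

dialect_drop = ref_score - dialect_score
semantic_drop = ref_score - semantic_score

print(f"drop for dialect variant:       {dialect_drop:.1f}")
print(f"drop for semantic perturbation: {semantic_drop:.1f}")

# The failure mode the paper identifies: the metric punishes the
# lower-resource dialect more than an actual change in meaning.
if dialect_drop > semantic_drop:
    print("Metric is not dialect-robust on this example.")
```

The paper formalizes this comparison with statistical tests over many perturbation pairs rather than single examples; the sketch only illustrates the per-example quantity being compared.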

Authors (9)
  1. Jiao Sun (29 papers)
  2. Thibault Sellam (19 papers)
  3. Elizabeth Clark (16 papers)
  4. Tu Vu (24 papers)
  5. Timothy Dozat (9 papers)
  6. Dan Garrette (21 papers)
  7. Aditya Siddhant (22 papers)
  8. Jacob Eisenstein (73 papers)
  9. Sebastian Gehrmann (48 papers)
Citations (16)
