
Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection (2201.10474v2)

Published 25 Jan 2022 in cs.CL and cs.AI

Abstract: Language models increasingly rely on massive web dumps for diverse text data. However, these sources are rife with undesirable content. As such, resources like Wikipedia, books, and newswire often serve as anchors for automatically selecting web text most suitable for language modeling, a process typically referred to as quality filtering. Using a new dataset of U.S. high school newspaper articles -- written by students from across the country -- we investigate whose language is preferred by the quality filter used for GPT-3. We find that newspapers from larger schools, located in wealthier, educated, and urban ZIP codes, are more likely to be classified as high quality. We then demonstrate that the filter's measurement of quality is unaligned with other sensible metrics, such as factuality or literary acclaim. We argue that privileging any corpus as high quality entails a language ideology, and that more care is needed to construct training corpora for language models, with better transparency and justification for the inclusion or exclusion of various texts.

Authors (8)
  1. Suchin Gururangan
  2. Dallas Card
  3. Sarah K. Dreier
  4. Emily K. Gade
  5. Leroy Z. Wang
  6. Zeyu Wang
  7. Luke Zettlemoyer
  8. Noah A. Smith
Citations (63)
