Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection (2201.10474v2)
Abstract: LLMs increasingly rely on massive web dumps for diverse text data. However, these sources are rife with undesirable content. As such, resources like Wikipedia, books, and newswire often serve as anchors for automatically selecting the web text most suitable for language modeling, a process typically referred to as quality filtering. Using a new dataset of U.S. high school newspaper articles -- written by students from across the country -- we investigate whose language is preferred by the quality filter used for GPT-3. We find that newspapers from larger schools, located in wealthier, more educated, and more urban ZIP codes, are more likely to be classified as high quality. We then demonstrate that the filter's measurement of quality is unaligned with other sensible metrics, such as factuality or literary acclaim. We argue that privileging any corpus as high quality entails a language ideology, and that more care is needed in constructing training corpora for LLMs, with better transparency and justification for the inclusion or exclusion of various texts.
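The filtering setup the abstract describes -- scoring web documents by how closely they resemble a "high quality" anchor corpus such as Wikipedia or books -- can be sketched minimally. The toy below is an illustrative simplification, not the actual GPT-3 filter (which is reported to use a classifier over hashed features trained to distinguish curated corpora from raw Common Crawl); the corpus contents, the unigram log-odds score, and the `keep` threshold are all assumptions for the sake of example.

```python
import math
from collections import Counter

def train_unigram(texts):
    """Build a toy unigram model (counts, total tokens, vocab size)."""
    counts = Counter()
    for t in texts:
        counts.update(t.lower().split())
    return counts, sum(counts.values()), len(counts)

def log_prob(tokens, model):
    counts, total, vocab = model
    # Add-one smoothing so unseen words get nonzero probability.
    return sum(math.log((counts[w] + 1) / (total + vocab + 1)) for w in tokens)

def quality_score(doc, hq_model, web_model):
    """Log-odds that `doc` resembles the anchor corpus over the web dump."""
    tokens = doc.lower().split()
    return log_prob(tokens, hq_model) - log_prob(tokens, web_model)

# Hypothetical anchor and web corpora, purely illustrative.
high_quality = [
    "the committee published its findings in the journal",
    "historians have long debated the causes of the war",
]
web_dump = [
    "click here to win a free prize now",
    "omg best deal ever buy now limited offer",
]

hq_model = train_unigram(high_quality)
web_model = train_unigram(web_dump)

def keep(doc, threshold=0.0):
    """Retain a document only if it scores above the (assumed) threshold."""
    return quality_score(doc, hq_model, web_model) > threshold
```

The paper's central point is visible even in this caricature: whatever corpus is placed on the `high_quality` side defines which writing styles survive filtering, which is a choice about whose language counts, not a neutral measurement.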
- Suchin Gururangan
- Dallas Card
- Sarah K. Dreier
- Emily K. Gade
- Leroy Z. Wang
- Zeyu Wang
- Luke Zettlemoyer
- Noah A. Smith