
Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research (2112.01716v1)

Published 3 Dec 2021 in cs.LG, cs.CL, cs.CV, cs.CY, and stat.ML

Abstract: Benchmark datasets play a central role in the organization of machine learning research. They coordinate researchers around shared research problems and serve as a measure of progress towards shared goals. Despite the foundational role of benchmarking practices in this field, relatively little attention has been paid to the dynamics of benchmark dataset use and reuse, within or across machine learning subcommunities. In this paper, we dig into these dynamics. We study how dataset usage patterns differ across machine learning subcommunities and across time from 2015-2020. We find increasing concentration on fewer and fewer datasets within task communities, significant adoption of datasets from other tasks, and concentration across the field on datasets that have been introduced by researchers situated within a small number of elite institutions. Our results have implications for scientific evaluation, AI ethics, and equity/access within the field.

Authors (4)
  1. Bernard Koch (5 papers)
  2. Emily Denton (18 papers)
  3. Alex Hanna (11 papers)
  4. Jacob G. Foster (9 papers)
Citations (122)