
AQuA: A Benchmarking Tool for Label Quality Assessment (2306.09467v2)

Published 15 Jun 2023 in cs.LG

Abstract: Machine learning (ML) models are only as good as the data they are trained on. But recent studies have found datasets widely used to train and evaluate ML models, e.g. ImageNet, to have pervasive labeling errors. Erroneous labels on the train set hurt ML models' ability to generalize, and they impact evaluation and model selection using the test set. Consequently, learning in the presence of labeling errors is an active area of research, yet this field lacks a comprehensive benchmark to evaluate these methods. Most of these methods are evaluated on a few computer vision datasets with significant variance in the experimental protocols. With such a large pool of methods and inconsistent evaluation, it is also unclear how ML practitioners can choose the right models to assess label quality in their data. To this end, we propose a benchmarking environment AQuA to rigorously evaluate methods that enable machine learning in the presence of label noise. We also introduce a design space to delineate concrete design choices of label error detection models. We hope that our proposed design space and benchmark enable practitioners to choose the right tools to improve their label quality and that our benchmark enables objective and rigorous evaluation of machine learning tools facing mislabeled data.
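The label error detection methods the benchmark evaluates typically compare a model's predicted class probabilities against the given labels. As a minimal illustrative sketch (not AQuA's actual API), the following applies a confident-learning-style rule: a sample is flagged as a likely label error when the predicted probability of its given label falls below the average self-confidence of that class. The function name and thresholding choice here are assumptions for illustration.

```python
import numpy as np

def flag_label_errors(probs, labels):
    """Flag likely label errors (confident-learning-style sketch).

    probs:  (n_samples, n_classes) predicted class probabilities,
            ideally from out-of-sample (cross-validated) predictions.
    labels: (n_samples,) given (possibly noisy) integer labels.
    Returns a boolean mask: True where the given label is suspect.
    """
    n_classes = probs.shape[1]
    # Per-class threshold: mean predicted probability of class c
    # over the samples currently labeled c (their "self-confidence").
    thresholds = np.array([
        probs[labels == c, c].mean() if np.any(labels == c) else 1.0
        for c in range(n_classes)
    ])
    # A sample is suspect if its own label's probability is below
    # the typical confidence for that class.
    self_conf = probs[np.arange(len(labels)), labels]
    return self_conf < thresholds[labels]

# Toy example: the last sample is labeled 0 but the model is
# confident it belongs to class 1, so it gets flagged.
probs = np.array([[0.90, 0.10],
                  [0.80, 0.20],
                  [0.10, 0.90],
                  [0.10, 0.90],
                  [0.05, 0.95]])
labels = np.array([0, 0, 1, 1, 0])
flags = flag_label_errors(probs, labels)
print(flags.tolist())  # [False, False, False, False, True]
```

Real methods in this space (e.g., confident learning as implemented in libraries like cleanlab) use cross-validated predictions and more careful joint-distribution estimates, but the core idea of comparing self-confidence to a per-class threshold is the same.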

Authors (6)
  1. Mononito Goswami (17 papers)
  2. Vedant Sanil (2 papers)
  3. Arjun Choudhry (14 papers)
  4. Arvind Srinivasan (8 papers)
  5. Chalisa Udompanyawit (1 paper)
  6. Artur Dubrawski (67 papers)
Citations (3)