SELECT: A Large-Scale Benchmark of Data Curation Strategies for Image Classification (2410.05057v1)

Published 7 Oct 2024 in cs.CV and cs.LG

Abstract: Data curation is the problem of how to collect and organize samples into a dataset that supports efficient learning. Despite the centrality of the task, little work has been devoted towards a large-scale, systematic comparison of various curation methods. In this work, we take steps towards a formal evaluation of data curation strategies and introduce SELECT, the first large-scale benchmark of curation strategies for image classification. In order to generate baseline methods for the SELECT benchmark, we create a new dataset, ImageNet++, which constitutes the largest superset of ImageNet-1K to date. Our dataset extends ImageNet with 5 new training-data shifts, each approximately the size of ImageNet-1K itself, and each assembled using a distinct curation strategy. We evaluate our data curation baselines in two ways: (i) using each training-data shift to train identical image classification models from scratch (ii) using the data itself to fit a pretrained self-supervised representation. Our findings show interesting trends, particularly pertaining to recent methods for data curation such as synthetic data generation and lookup based on CLIP embeddings. We show that although these strategies are highly competitive for certain tasks, the curation strategy used to assemble the original ImageNet-1K dataset remains the gold standard. We anticipate that our benchmark can illuminate the path for new methods to further reduce the gap. We release our checkpoints, code, documentation, and a link to our dataset at https://github.com/jimmyxu123/SELECT.

Summary

  • The paper introduces SELECT, a benchmark that systematically evaluates data curation strategies for image classification using the extended ImageNet++ dataset.
  • It employs two evaluation methods: training identical models from scratch on each dataset, and using each dataset to fit a pretrained self-supervised representation.
  • The findings highlight that while synthetic and CLIP-based retrieval techniques show promise, they require further refinement to meet the standard set by ImageNet-1K.

The paper "SELECT: A Large-Scale Benchmark of Data Curation Strategies for Image Classification" advances the systematic study of data curation for image classification. Data curation is the process of collecting and organizing samples into a dataset that supports efficient learning. Although it is a crucial aspect of machine learning, curation strategies have rarely been compared systematically at scale. The paper addresses this gap by introducing SELECT, the first large-scale benchmark dedicated to evaluating data curation methodologies.

The researchers introduce ImageNet++, a significant extension to the well-known ImageNet-1K, which serves as the core of the SELECT benchmark. ImageNet++ incorporates five additional training-data shifts, each mirroring ImageNet-1K in size but curated with distinct methodologies. This extension allows for a robust comparison of different curation strategies under controlled conditions.
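One of the curation strategies the benchmark evaluates, lookup based on CLIP embeddings, can be illustrated with a minimal sketch. This is not the authors' pipeline: it assumes candidate-image and class-prompt embeddings have already been computed with a CLIP model, and the function name `select_by_similarity` is illustrative.

```python
import numpy as np

def select_by_similarity(image_embs, class_emb, k):
    """Return indices of the k candidate images whose (normalized)
    embeddings are most similar to the class-prompt embedding."""
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    txt = class_emb / np.linalg.norm(class_emb)
    sims = img @ txt                  # cosine similarity per candidate
    return np.argsort(-sims)[:k]     # top-k most similar candidates

# Toy example: 5 candidate "images" with 4-dim embeddings;
# candidate 2 is nearly identical to the prompt embedding.
rng = np.random.default_rng(0)
cands = rng.normal(size=(5, 4))
prompt = cands[2] + 0.01 * rng.normal(size=4)
picked = select_by_similarity(cands, prompt, k=2)
```

Repeating this per class and pooling the selected candidates yields a curated dataset of the desired size; the trade-off the paper examines is how such embedding-based selection compares to other strategies.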

Key elements of the research include:

  1. Benchmark Evaluation: The authors propose two primary evaluation methods:
    • Training identical image classification models from scratch across various data shifts.
    • Using each curated dataset to fit a pretrained self-supervised representation.
  2. Comparative Insights: The paper reveals interesting trends in data curation, particularly for newer methods such as synthetic data generation and retrieval based on CLIP embeddings. Despite their competitive performance in specific scenarios, the curation strategy behind the original ImageNet-1K remains the gold standard.
  3. Implications for Future Research: The paper suggests that while recent methods show promise, there is room for further refinement to meet or surpass the existing ImageNet-1K results. SELECT is intended to drive innovation in data curation practices by providing a rigorous evaluation framework and a shared dataset for the research community to build upon.
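A common way to realize the second evaluation mode, fitting a pretrained representation, is a linear probe: a linear classifier trained on frozen features. The sketch below is an illustration with synthetic features, not the authors' protocol; the closed-form least-squares fit stands in for actual probe training.

```python
import numpy as np

def linear_probe_accuracy(train_feats, train_labels,
                          test_feats, test_labels, n_classes):
    """Fit a linear classifier on frozen features via least squares
    against one-hot targets, then report test accuracy."""
    Y = np.eye(n_classes)[train_labels]                  # one-hot targets
    W, *_ = np.linalg.lstsq(train_feats, Y, rcond=None)  # closed-form fit
    preds = (test_feats @ W).argmax(axis=1)
    return (preds == test_labels).mean()

# Toy example: two well-separated clusters of 8-dim "features".
rng = np.random.default_rng(1)
f0 = rng.normal(loc=-2.0, size=(50, 8))
f1 = rng.normal(loc=+2.0, size=(50, 8))
X = np.vstack([f0, f1])
y = np.array([0] * 50 + [1] * 50)
acc = linear_probe_accuracy(X, y, X, y, n_classes=2)
```

Because the probe is identical across datasets, differences in accuracy can be attributed to the curation strategy that produced each training set rather than to the model.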

The release of the accompanying code, documentation, and dataset further underscores the authors' commitment to fostering ongoing research and collaboration in the field. The GitHub repository offers resources to facilitate replication and extension of their findings, aiming to enhance the overall understanding and development of data curation strategies.
