SELECT: A Large-Scale Benchmark of Data Curation Strategies for Image Classification (2410.05057v1)
Abstract: Data curation is the problem of how to collect and organize samples into a dataset that supports efficient learning. Despite the centrality of the task, little work has been devoted to a large-scale, systematic comparison of curation methods. In this work, we take steps towards a formal evaluation of data curation strategies and introduce SELECT, the first large-scale benchmark of curation strategies for image classification. To generate baseline methods for the SELECT benchmark, we create a new dataset, ImageNet++, which constitutes the largest superset of ImageNet-1K to date. Our dataset extends ImageNet with 5 new training-data shifts, each approximately the size of ImageNet-1K itself and each assembled using a distinct curation strategy. We evaluate our data curation baselines in two ways: (i) training identical image classification models from scratch on each training-data shift, and (ii) using the data itself to fit a pretrained self-supervised representation. Our findings show interesting trends, particularly for recent curation methods such as synthetic data generation and lookup based on CLIP embeddings. We show that although these strategies are highly competitive for certain tasks, the curation strategy used to assemble the original ImageNet-1K dataset remains the gold standard. We anticipate that our benchmark can illuminate the path for new methods to further reduce the gap. We release our checkpoints, code, documentation, and a link to our dataset at https://github.com/jimmyxu123/SELECT.
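The abstract's second evaluation mode, fitting a classifier on top of a pretrained self-supervised representation using each training-data shift, can be illustrated with a minimal linear-probe sketch. This is not the authors' released code: the backbone choice (DINO ResNet-50 via torch.hub), the `imagenetpp_shift/train` directory layout, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a linear probe on frozen self-supervised features,
# trained on one hypothetical ImageNet++ training-data shift.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen self-supervised backbone (DINO ResNet-50 loaded from torch.hub);
# any other self-supervised encoder could be substituted.
backbone = torch.hub.load("facebookresearch/dino:main", "dino_resnet50")
backbone.eval().to(device)
for p in backbone.parameters():
    p.requires_grad = False

# Hypothetical ImageFolder layout for one training-data shift.
tfm = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("imagenetpp_shift/train", transform=tfm)
loader = DataLoader(train_set, batch_size=256, shuffle=True, num_workers=8)

probe = nn.Linear(2048, 1000).to(device)   # 1000 ImageNet-1K classes
opt = torch.optim.SGD(probe.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):                    # epoch count is illustrative
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        with torch.no_grad():
            feats = backbone(images)       # frozen 2048-d features
        loss = criterion(probe(feats), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The first evaluation mode described in the abstract (training identical classifiers from scratch on each shift) differs only in that the backbone is randomly initialized and trained end-to-end rather than frozen.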