
LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop (1506.03365v3)

Published 10 Jun 2015 in cs.CV

Abstract: While there has been remarkable progress in the performance of visual recognition algorithms, the state-of-the-art models tend to be exceptionally data-hungry. Large labeled training datasets, expensive and tedious to produce, are required to optimize millions of parameters in deep network models. Lagging behind the growth in model capacity, the available datasets are quickly becoming outdated in terms of size and density. To circumvent this bottleneck, we propose to amplify human effort through a partially automated labeling scheme, leveraging deep learning with humans in the loop. Starting from a large set of candidate images for each category, we iteratively sample a subset, ask people to label them, classify the others with a trained model, split the set into positives, negatives, and unlabeled based on the classification confidence, and then iterate with the unlabeled set. To assess the effectiveness of this cascading procedure and enable further progress in visual recognition research, we construct a new image dataset, LSUN. It contains around one million labeled images for each of 10 scene categories and 20 object categories. We experiment with training popular convolutional networks and find that they achieve substantial performance gains when trained on this dataset.

Authors (6)
  1. Fisher Yu (104 papers)
  2. Ari Seff (15 papers)
  3. Yinda Zhang (68 papers)
  4. Shuran Song (110 papers)
  5. Thomas Funkhouser (66 papers)
  6. Jianxiong Xiao (14 papers)
Citations (2,162)

Summary

A Comprehensive Examination of LSUN: A Large-Scale Image Dataset Built by Deep Learning with Humans in the Loop

The paper "LSUN: Construction of a Large-Scale Image Dataset using Deep Learning with Humans in the Loop" by Fisher Yu et al. addresses a critical bottleneck in visual recognition: the scarcity and outdated nature of large-scale annotated datasets. The research offers a method to expand dataset density and scale by combining human effort with an automated labeling pipeline. The resulting LSUN dataset comprises around one million labeled images for each of 10 scene categories and 20 object categories, a significant stride in dataset construction for deep learning applications.

Dataset Construction Methodology

The research elucidates a semi-automatic process for labeling large datasets to mitigate the extensive manual effort typically required. For each target category, a vast initial pool of candidate images is obtained through a keyword search, followed by a cascade of iterative labeling strategies. Each iteration encompasses four steps:

  1. Random Sampling: A small subset of images is randomly sampled from the initial pool.
  2. Human Labeling: This subset is manually labeled by crowdsourced workers.
  3. Classifier Training: A convolutional neural network is trained on this labeled subset to classify the remaining images.
  4. Label Propagation: The trained classifier predicts the labels and confidences of the remaining images, which are then divided into positive, negative, and unlabeled sets based on classification confidence.

This iterative process continues until the remaining unlabeled subset is sufficiently small to be manually annotated, ensuring efficiency in both human and computational resources.
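
To make the four-step cascade concrete, the following is a minimal Python sketch of one possible implementation. The helpers `human_label` and `train_classifier` stand in for the crowdsourcing and CNN-training steps, and the batch size and confidence thresholds are illustrative assumptions, not the paper's actual parameters.

```python
import random

def label_category(candidates, human_label, train_classifier,
                   sample_size=1000, pos_threshold=0.95,
                   neg_threshold=0.05, min_pool=2000):
    """Sketch of an LSUN-style labeling cascade for one category.

    `candidates` is a list of image URLs; `human_label(url)` simulates a
    crowdsourced judgment; `train_classifier(examples)` returns a callable
    mapping a URL to an estimated probability of the target category.
    """
    positives, negatives, labeled = [], [], []
    unlabeled = list(candidates)

    while len(unlabeled) > min_pool:
        # 1. Randomly sample a small subset of the remaining pool.
        batch = set(random.sample(unlabeled, min(sample_size, len(unlabeled))))
        # 2. Crowdsourced workers label the sampled images.
        labeled += [(url, human_label(url)) for url in batch]
        # 3. Train a classifier on everything labeled so far.
        clf = train_classifier(labeled)
        # 4. Split the remaining images by classification confidence.
        next_unlabeled = []
        for url in (u for u in unlabeled if u not in batch):
            score = clf(url)
            if score >= pos_threshold:
                positives.append(url)
            elif score <= neg_threshold:
                negatives.append(url)
            else:
                next_unlabeled.append(url)
        unlabeled = next_unlabeled

    # The small residual pool is annotated entirely by hand.
    labeled += [(url, human_label(url)) for url in unlabeled]
    return positives, negatives, labeled
```

Each pass shrinks the unlabeled pool both through human labeling and through confident automatic decisions, which is what amplifies the human effort.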

Data Collection and Density

The authors emphasize the density of the LSUN dataset. Existing datasets offer relatively few images per category, which leads deep networks to learn noisy and unstable features. LSUN counters this by aiming for around one million images per category, a density roughly ten times that of the Places dataset and a hundred times that of ImageNet.

The collection involved leveraging large-scale image search engines with targeted queries to gather around 100 million candidate URLs in total, yielding a diverse and extensive set of images that underwent initial quality checks before entering the labeling pipeline.

Deep Learning Integration and Human Verification

The classifiers used in the pipeline range from multi-layer perceptrons to fine-tuned deep networks such as AlexNet and GoogLeNet. The iterative design ensures that with each cycle the classifiers, enriched with additional labeled examples, become increasingly adept at separating positive from negative examples, sharply reducing the volume of images that require manual annotation.

Furthermore, the method maintains high precision in the final labeled data, with statistical tests confirming that over 90% of the labels agree with ground truth. This precision, while slightly lower than that of fully manual labeling, is sufficient to train high-performance classification models.
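
The summary does not spell out how the confidence cut-offs are chosen, but the stated precision target suggests they are tuned on a held-out set of human labels. Below is a hedged sketch of such tuning, under the assumption that thresholds are selected to meet a target precision on held-out data; the function name and scan granularity are illustrative.

```python
import numpy as np

def pick_thresholds(scores, labels, target_precision=0.95):
    """Choose confidence cut-offs on a held-out human-labeled set.

    Returns (pos_threshold, neg_threshold) such that images scoring at or
    above pos_threshold are at least `target_precision` positive, and images
    scoring at or below neg_threshold are at least `target_precision`
    negative. The exact criterion in the paper may differ.
    """
    scores = np.asarray(scores)
    labels = np.asarray(labels, dtype=bool)

    pos_threshold, neg_threshold = 1.0, 0.0
    # Scan upward: take the loosest cut-off meeting the precision target.
    for t in np.linspace(0.5, 1.0, 501):
        selected = scores >= t
        if selected.any() and labels[selected].mean() >= target_precision:
            pos_threshold = t
            break
    # Scan downward for the negative side symmetrically.
    for t in np.linspace(0.5, 0.0, 501):
        selected = scores <= t
        if selected.any() and (~labels[selected]).mean() >= target_precision:
            neg_threshold = t
            break
    return pos_threshold, neg_threshold
```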

Experimental Validation and Performance Gains

The paper reports significant improvements in the performance of convolutional neural networks trained on the LSUN dataset. For scene categories, fine-tuning a standard AlexNet model on LSUN data yields a 22.37% improvement in error rate on the Places benchmark. A parallel analysis on PASCAL VOC 2012 shows that models pre-trained on LSUN outperform those pre-trained on ImageNet, underscoring that a denser set of relevant category images can be more beneficial than a broader but sparser set of categories.
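
As a modern illustration of the fine-tuning setup (the paper predates PyTorch, so this is not the authors' code; the optimizer settings and the 10-way output layer for LSUN's scene categories are assumptions), fine-tuning AlexNet might look like this:

```python
import torch
import torch.nn as nn
from torchvision.models import alexnet, AlexNet_Weights

# Start from ImageNet weights, then resize the final layer for the
# 10 LSUN scene categories.
model = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 10)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def fine_tune(loader, epochs=1):
    """Run training passes over a DataLoader of (image, label) batches."""
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()
```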

Practical and Theoretical Implications

This methodology and the resulting LSUN dataset have substantial implications for visual recognition. The semi-automatic pipeline offers a scalable way to grow the volume and quality of training data in step with the rising capacity of modern deep learning models. That alignment is critical to continued advances in deep visual learning, mitigating the existing lag between dataset density and model complexity.

Future Directions

The paper hints at the continued expansion of the LSUN dataset, which will remain freely accessible to the community. This foresight invites ongoing experimentation and could lead to further refinements in scalable data labeling practices. Future developments could involve more sophisticated human-in-the-loop methods, cross-domain generalizability tests, or exploration of additional categories that could benefit the wider AI community.

In essence, this work addresses a significant research gap by providing a high-density, large-scale dataset paired with an innovative approach to labeling, ensuring that visual recognition keeps pace with the evolving capacities of deep learning models.