CrowdHuman: A Benchmark for Detecting Human in a Crowd (1805.00123v1)

Published 30 Apr 2018 in cs.CV

Abstract: Human detection has witnessed impressive progress in recent years. However, the occlusion issue of detecting human in highly crowded environments is far from solved. To make matters worse, crowd scenarios are still under-represented in current human detection benchmarks. In this paper, we introduce a new dataset, called CrowdHuman, to better evaluate detectors in crowd scenarios. The CrowdHuman dataset is large, rich-annotated and contains high diversity. There are a total of $470K$ human instances from the train and validation subsets, and $\sim 22.6$ persons per image, with various kinds of occlusions in the dataset. Each human instance is annotated with a head bounding-box, human visible-region bounding-box and human full-body bounding-box. Baseline performance of state-of-the-art detection frameworks on CrowdHuman is presented. The cross-dataset generalization results of CrowdHuman dataset demonstrate state-of-the-art performance on previous dataset including Caltech-USA, CityPersons, and Brainwash without bells and whistles. We hope our dataset will serve as a solid baseline and help promote future research in human detection tasks.

Citations (621)

Summary

  • The paper presents the CrowdHuman dataset with triple-layer annotations for full-body, visible region, and head detection to address occlusion challenges in crowded environments.
  • The dataset contains around 470K annotated human instances across over 24K images, markedly exceeding previous benchmarks in density and diversity.
  • Experiments with state-of-the-art models like FPN and RetinaNet show that pre-training on CrowdHuman enhances detection performance across multiple challenging datasets.

An Essay on "CrowdHuman: A Benchmark for Detecting Human in a Crowd"

The paper "CrowdHuman: A Benchmark for Detecting Human in a Crowd" presents an impactful contribution to the domain of human detection within highly crowded environments. The authors address the pervasive challenge of occlusion in human detection—a scenario inadequately represented in existing benchmarks. This work introduces the CrowdHuman dataset, which is meticulously designed to evaluate human detection systems in cluttered and occlusion-heavy scenes, providing a comprehensive approach to interpret occlusion and improve detection models.

Contributions and Analysis

The CrowdHuman dataset distinguishes itself by its considerable size and diversity, containing approximately 470K annotated human instances in the training and validation subsets alone, with 15,000 training, 4,370 validation, and 5,000 test images in total. Each image averages 22.6 human instances, markedly more than in previous datasets. The dataset is enriched by annotations at three levels: full-body bounding boxes, visible-region bounding boxes, and head bounding boxes. This triple-layered annotation scheme introduces new possibilities for tackling occlusion and enhances the robustness of detection frameworks.
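
For concreteness, the following is a minimal sketch of how such triple-layer annotations might be consumed. It assumes the JSON-lines (.odgt) annotation format commonly distributed with CrowdHuman, where each record lists "gtboxes" entries carrying "fbox" (full body), "vbox" (visible region), and "hbox" (head) boxes; the file path is a placeholder and the field names should be checked against the actual release.

```python
import json
from collections import Counter

def load_annotations(odgt_path):
    """Yield (image_id, gtboxes) pairs from a CrowdHuman-style .odgt file.

    Assumes one JSON record per line with an "ID" string and a "gtboxes" list;
    each box dict is expected to hold "fbox", "vbox", and "hbox" in [x, y, w, h] form.
    """
    with open(odgt_path, "r") as f:
        for line in f:
            record = json.loads(line)
            yield record["ID"], record.get("gtboxes", [])

def density_stats(odgt_path):
    """Count annotated persons per image, skipping non-person (ignore) regions."""
    counts = Counter()
    for image_id, boxes in load_annotations(odgt_path):
        counts[image_id] = sum(1 for b in boxes if b.get("tag") == "person")
    total = sum(counts.values())
    return total, total / max(len(counts), 1)

if __name__ == "__main__":
    # "annotation_train.odgt" is a hypothetical path to the training annotations.
    total_instances, per_image = density_stats("annotation_train.odgt")
    print(f"{total_instances} person instances, {per_image:.1f} per image on average")
```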

The dataset's statistical examination indicates a significant advancement over earlier datasets like Caltech-USA, CityPersons, and KITTI both in terms of density and annotation comprehensiveness. One major virtue of CrowdHuman is its explicit targeting of crowd scenarios, where overlap between instances is much higher—a strategic move aimed at addressing the limitations of existing benchmarks.

Experimental Evaluation

Baseline performances are provided using state-of-the-art detection models, including FPN (Feature Pyramid Network) and RetinaNet. The results show that CrowdHuman is considerably harder than earlier benchmarks, with mMR (the log-average miss rate over the false-positives-per-image range [10^-2, 10^0]) and AP (average precision) serving as the primary metrics. FPN consistently outperforms RetinaNet across the three detection tasks (visible body, full body, head), indicating the need for robust architectures capable of separating heavily overlapping instances in crowded settings.
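
To make the mMR metric concrete, the sketch below computes a log-average miss rate from a detector's miss-rate/FPPI curve, following the Caltech-benchmark convention of averaging (geometrically) over nine FPPI reference points evenly spaced in log-space on [1e-2, 1e0]. It is an illustration of the metric's definition, not the paper's exact evaluation code, and the example curve values are invented.

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate):
    """Compute mMR from a miss-rate vs. FPPI curve traced by sweeping the score threshold.

    fppi must be sorted in ascending order; miss_rate holds the corresponding values.
    """
    fppi = np.asarray(fppi)
    miss_rate = np.asarray(miss_rate)
    refs = np.logspace(-2.0, 0.0, num=9)  # nine reference FPPI points in [0.01, 1.0]
    sampled = []
    for ref in refs:
        # take the miss rate at the largest FPPI not exceeding the reference point;
        # if the curve has not reached that FPPI yet, count a miss rate of 1.0
        idx = np.where(fppi <= ref)[0]
        sampled.append(miss_rate[idx[-1]] if idx.size else 1.0)
    # geometric mean of the sampled miss rates, guarding against log(0)
    return np.exp(np.mean(np.log(np.maximum(sampled, 1e-10))))

# Toy example: miss rate falls as more false positives per image are tolerated.
fppi = [0.01, 0.03, 0.1, 0.3, 1.0]
mr = [0.80, 0.60, 0.45, 0.35, 0.30]
print(f"mMR ~ {log_average_miss_rate(fppi, mr):.3f}")
```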

The dataset also demonstrates strong cross-dataset generalization. When models pre-trained on CrowdHuman are fine-tuned on other benchmarks such as Caltech, CityPersons, COCOPersons, and Brainwash, they achieve state-of-the-art performance, underscoring the dataset's utility as a pre-training corpus. For instance, on Caltech pedestrian detection, pre-training on CrowdHuman reduced mMR to 3.46%, setting a new standard at the time.
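
A minimal sketch of this pre-train-then-fine-tune recipe is shown below, using torchvision's Faster R-CNN with an FPN backbone as a stand-in for the paper's FPN baseline. The CrowdHuman checkpoint path and the target-dataset loader are hypothetical placeholders; the paper's actual training setup may differ.

```python
import torch
import torchvision

def build_finetune_model(crowdhuman_ckpt="crowdhuman_fpn.pth", num_classes=2):
    # Detector with ResNet-50 + FPN backbone; num_classes = 1 person class + background.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
        weights=None, num_classes=num_classes
    )
    # Initialise from weights pre-trained on CrowdHuman (hypothetical checkpoint file).
    state = torch.load(crowdhuman_ckpt, map_location="cpu")
    model.load_state_dict(state, strict=False)
    return model

def finetune(model, target_loader, epochs=3, lr=1e-3):
    # Standard fine-tuning loop on a target benchmark (e.g. CityPersons);
    # target_loader is assumed to yield (images, targets) in torchvision detection format.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=1e-4)
    for _ in range(epochs):
        for images, targets in target_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            losses = model(images, targets)  # dict of detection losses in train mode
            loss = sum(losses.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```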

Implications and Future Directions

The introduction of the CrowdHuman dataset has several implications. Practically, it provides a well-grounded basis for developing human detection systems critical for applications such as surveillance, autonomous navigation, and human-computer interaction. Theoretically, the dataset catalyzes further research into handling occlusions and complex crowd interactions, motivating innovations in architectural design, feature extraction, and data augmentation strategies.

Future research could delve into multi-modal data fusion, enriching detection systems with additional sensory inputs such as lidar or radar for a more complete understanding of crowded scenes. Additionally, exploring the transfer-learning potential of CrowdHuman across further domains could yield insights into more universal occlusion-handling strategies.

By directly confronting the crowd detection problem, the paper establishes a cornerstone dataset that invites further advances on real-world scenarios made ambiguous by occlusion. Overall, CrowdHuman serves as an indispensable resource for building more robust human detection in crowded environments.