Benchmarks for Corruption Invariant Person Re-identification (2111.00880v2)

Published 1 Nov 2021 in cs.CV

Abstract: When deploying person re-identification (ReID) model in safety-critical applications, it is pivotal to understanding the robustness of the model against a diverse array of image corruptions. However, current evaluations of person ReID only consider the performance on clean datasets and ignore images in various corrupted scenarios. In this work, we comprehensively establish six ReID benchmarks for learning corruption invariant representation. In the field of ReID, we are the first to conduct an exhaustive study on corruption invariant learning in single- and cross-modality datasets, including Market-1501, CUHK03, MSMT17, RegDB, SYSU-MM01. After reproducing and examining the robustness performance of 21 recent ReID methods, we have some observations: 1) transformer-based models are more robust towards corrupted images, compared with CNN-based models, 2) increasing the probability of random erasing (a commonly used augmentation method) hurts model corruption robustness, 3) cross-dataset generalization improves with corruption robustness increases. By analyzing the above observations, we propose a strong baseline on both single- and cross-modality ReID datasets which achieves improved robustness against diverse corruptions. Our codes are available on https://github.com/MinghuiChen43/CIL-ReID.

Citations (22)

Summary

  • The paper introduces benchmarks simulating 20 corruption types with five severity levels to assess person ReID model robustness.
  • It demonstrates that transformer-based models excel under corruption while random erasing negatively impacts model performance.
  • The study reveals that enhanced corruption robustness improves cross-dataset generalization, establishing a strong baseline for future resilient ReID systems.

Benchmarks for Corruption Invariant Person Re-identification: A Summary

The paper "Benchmarks for Corruption Invariant Person Re-identification" by Minghui Chen, Zhiqiang Wang, and Feng Zheng addresses a crucial gap in the evaluation of person Re-identification (ReID) systems: the robustness of these models against image corruption. Traditional assessment of ReID models has predominantly focused on clean datasets, neglecting the real-world scenarios where images often suffer from various corruptions. This research offers a systematic approach by establishing corruption invariant ReID benchmarks across both single- and cross-modality datasets, specifically Market-1501, CUHK03, MSMT17, RegDB, and SYSU-MM01.

Core Contributions

The researchers introduce comprehensive benchmarks that simulate real-world corruptions by applying 20 corruption types with five severity levels across the above datasets. These are categorized into noise, blur, weather, and digital corruptions. The intent is to foster models that generalize better in unpredictable, degraded conditions without needing task-specific adaptations.
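As an illustration of severity-parameterized corruption, one corruption type (Gaussian noise) can be sketched in NumPy as follows. The sigma values per severity level and the [0, 1] image convention are assumptions for illustration, not the benchmark's exact parameters:

```python
import numpy as np

# Illustrative noise scale per severity level 1-5 (assumed values,
# not the benchmark's exact settings).
SIGMAS = {1: 0.04, 2: 0.08, 3: 0.12, 4: 0.18, 5: 0.26}

def gaussian_noise(img: np.ndarray, severity: int, seed: int = 0) -> np.ndarray:
    """Apply Gaussian noise to an image in [0, 1] at a given severity level."""
    rng = np.random.default_rng(seed)
    sigma = SIGMAS[severity]
    noisy = img + rng.normal(0.0, sigma, size=img.shape)
    # Keep pixel values in the valid range after perturbation.
    return np.clip(noisy, 0.0, 1.0)
```

A benchmark in this style would apply each of the 20 corruption functions at each of the five severity levels to every test image.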

Significantly, this paper evaluates 21 state-of-the-art (SOTA) ReID methods, encompassing both CNN- and Transformer-based architectures, to understand their corruption robustness. The models were assessed under three scenarios: corrupted query, corrupted gallery, and both. The paper highlights critical observations:

  1. Transformer Robustness: Transformer-based models showed superior robustness to corrupted images compared with CNN-based models, attributed to their ability to capture structured patterns even amid corruption.
  2. Random Erasing Impact: Increasing the probability of random erasing, a commonly used data augmentation technique, diminishes robustness against corruptions. It is hypothesized that the severe occlusion this augmentation introduces hinders the network's ability to exploit detailed discriminative information.
  3. Cross-Dataset Correlation: Contrary to prior claims that robustness to synthetic corruptions does not transfer to real-world distribution shifts, the paper demonstrates that cross-dataset generalization tends to improve as corruption robustness increases.
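The random-erasing augmentation discussed in observation 2 can be made concrete with a minimal NumPy sketch; the area range and random-value fill are illustrative assumptions, not the exact settings studied in the paper:

```python
import numpy as np

def random_erasing(img: np.ndarray, p: float = 0.5,
                   area_frac: tuple = (0.02, 0.2),
                   seed: int = 0) -> np.ndarray:
    """With probability p, overwrite a random square region with random values.

    img: H x W x C array in [0, 1]. Parameters are illustrative only.
    """
    rng = np.random.default_rng(seed)
    out = img.copy()
    if rng.random() >= p:          # skip augmentation with probability 1 - p
        return out
    h, w, c = img.shape
    frac = rng.uniform(*area_frac)                 # fraction of image area
    eh = max(1, int(h * np.sqrt(frac)))            # erased region height
    ew = max(1, int(w * np.sqrt(frac)))            # erased region width
    y = rng.integers(0, h - eh + 1)
    x = rng.integers(0, w - ew + 1)
    out[y:y + eh, x:x + ew] = rng.random((eh, ew, c))
    return out
```

Raising `p` makes the hard occlusion more frequent, which is the knob the paper finds to hurt corruption robustness.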

Proposing a Robust Baseline

From these insights, the authors propose a strong ReID baseline that combines local-based augmentation, a consistent identity loss, and an adjusted inference strategy to boost corruption robustness. The baseline employs soft random erasing and self-patch mixing, two augmentations that mitigate the severe occlusions traditionally introduced by random erasing.
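A plausible NumPy sketch of the two augmentations follows; it conveys the idea (soft erasing keeps a fraction of original pixels inside the erased region, and self-patch mixing pastes a patch copied from elsewhere in the same image), but the paper's exact formulations and parameters may differ:

```python
import numpy as np

def soft_random_erasing(img, keep_frac=0.5, area_frac=0.1, seed=0):
    """Erase a random region, but keep roughly keep_frac of its pixels intact."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w, c = img.shape
    eh = max(1, int(h * np.sqrt(area_frac)))
    ew = max(1, int(w * np.sqrt(area_frac)))
    y = rng.integers(0, h - eh + 1)
    x = rng.integers(0, w - ew + 1)
    keep = rng.random((eh, ew)) < keep_frac        # True = keep original pixel
    noise = rng.random((eh, ew, c))
    region = out[y:y + eh, x:x + ew]
    out[y:y + eh, x:x + ew] = np.where(keep[..., None], region, noise)
    return out

def self_patch_mixing(img, area_frac=0.1, seed=0):
    """Paste a patch copied from a random location of the same image."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w, _ = img.shape
    eh = max(1, int(h * np.sqrt(area_frac)))
    ew = max(1, int(w * np.sqrt(area_frac)))
    sy, sx = rng.integers(0, h - eh + 1), rng.integers(0, w - ew + 1)
    dy, dx = rng.integers(0, h - eh + 1), rng.integers(0, w - ew + 1)
    out[dy:dy + eh, dx:dx + ew] = img[sy:sy + eh, sx:sx + ew]
    return out
```

Both variants occlude less aggressively than hard random erasing: soft erasing preserves partial local detail, and self-patch mixing replaces the region with plausible person-image statistics rather than a constant or noise.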

Additionally, the consistent identity loss enforces consistent network responses across original and augmented samples, stabilizing model predictions under perturbation. Finally, the authors advocate extracting inference features before the BNNeck layer, to avoid the domain-specific bias that layer introduces.
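One common way to realize such a consistency objective is a symmetric KL divergence between the identity predictions on the clean and augmented views; the sketch below illustrates that idea in NumPy, though it is not necessarily the paper's exact loss:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_clean, logits_aug, eps=1e-12):
    """Symmetric KL between identity predictions on clean and augmented views.

    logits_*: (batch, num_identities) classifier outputs.
    """
    p = softmax(logits_clean)
    q = softmax(logits_aug)
    kl_pq = (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1)
    kl_qp = (q * (np.log(q + eps) - np.log(p + eps))).sum(axis=-1)
    return 0.5 * (kl_pq + kl_qp).mean()
```

The loss is zero when the two views yield identical identity distributions and grows as the augmented prediction drifts from the clean one, which pushes the network toward augmentation-invariant responses.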

Implications and Future Directions

The implications of this work extend not only to enhancing model reliability in safety-critical applications but also to coping with unforeseen image degradations, potentially influencing tasks across broader domains. The demonstrated resilience of transformer architectures could inspire ongoing refinement of model designs toward increased robustness.

Future research could explore the synergy between robustness, efficiency, and fairness. By sourcing or simulating data that can reflect real-world distributions more broadly, future benchmarks might more accurately encapsulate the challenges faced in uncontrolled environments. Moreover, exploring corruption robustness could inform techniques for natural distribution shifts, driving progress in generalizable ReID systems.

Overall, this paper offers a foundational step towards evaluating and improving the corruption resilience of person Re-identification systems, laying the groundwork for future advancements in the field.