Bag of Tricks and A Strong Baseline for Deep Person Re-identification (1903.07071v3)

Published 17 Mar 2019 in cs.CV

Abstract: This paper explores a simple and efficient baseline for person re-identification (ReID). Person ReID with deep neural networks has made progress and achieved high performance in recent years. However, many state-of-the-art methods design complex network structures and concatenate multi-branch features. In the literature, some effective training tricks appear only briefly in a few papers or source codes. This paper collects and evaluates these effective training tricks for person ReID. By combining these tricks, the model achieves 94.5% rank-1 accuracy and 85.9% mAP on Market1501 using only global features. Our codes and models are available at https://github.com/michuanhaohao/reid-strong-baseline.

Citations (1,086)

Summary

  • The paper systematically evaluates training tricks by integrating them into a ResNet50 baseline for person re-identification.
  • It introduces BNNeck to separately optimize ID and triplet losses, enhancing cosine similarity and overall performance.
  • Detailed analysis of hyperparameters such as warmup learning rate and random erasing yields 94.5% Rank-1 and 85.9% mAP on Market1501.

Deep Dive into "Bag of Tricks and A Strong Baseline for Deep Person Re-identification"

"Bag of Tricks and A Strong Baseline for Deep Person Re-identification" by Hao Luo et al. presents a systematic exploration of various effective training techniques to optimize person re-identification (ReID) performance using deep neural networks. This paper scrutinizes several established tricks and introduces a solid baseline model to advance the field.

Key Contributions and Findings

The primary contributions of the paper are enumerated below:

  1. Systematic Evaluation and Integration of Training Tricks:
    • A collection of effective training tricks from existing literature is systematically evaluated and integrated into a standard ResNet50 based baseline.
    • The paper highlights the significance of a strong baseline for advancing ReID research, showing that many past methods were built on relatively weak baselines.
  2. BNNeck Architecture:
    • A novel neck structure named BNNeck is introduced: a batch normalization layer inserted between the global feature and the classifier's fully connected layer.
    • With BNNeck, the model distinctly optimizes ID loss and triplet loss, resolving the inconsistency when training with both losses simultaneously.
    • BNNeck facilitates better performance on cosine distance metrics due to the normalized feature distribution.
  3. Detailed Analysis of Hyperparameters and Training Techniques:
    • Insightful analysis on the impact of warmup learning rate, random erasing augmentation, label smoothing, last stride, and center loss on model performance.
    • The experiments show that integrating these tricks raises the baseline performance significantly, achieving 94.5% rank-1 accuracy and 85.9% mAP on the Market1501 dataset using global features alone.
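The BNNeck idea (contribution 2) can be sketched in plain Python. This is a deliberately simplified, affine-free batch normalization for illustration only; the paper's implementation is a standard PyTorch `BatchNorm1d` layer with learned scale and running statistics:

```python
import math

def batch_norm(features, eps=1e-5):
    """Normalize each feature dimension to zero mean, unit variance across the batch.
    Simplified: no learnable affine parameters, no running statistics."""
    n, dims = len(features), len(features[0])
    means = [sum(f[d] for f in features) / n for d in range(dims)]
    vars_ = [sum((f[d] - means[d]) ** 2 for f in features) / n for d in range(dims)]
    return [[(f[d] - means[d]) / math.sqrt(vars_[d] + eps) for d in range(dims)]
            for f in features]

def bnneck_forward(features):
    """BNNeck: the triplet loss sees the raw feature f_t; the ID loss (via the
    classifier FC layer) and cosine-distance inference use the normalized f_i."""
    f_t = features              # fed to the triplet loss
    f_i = batch_norm(features)  # fed to the ID loss and used at inference time
    return f_t, f_i
```

Because `f_i` lies on a roughly zero-mean, unit-variance distribution, cosine distance between normalized features becomes a more reliable metric, which is the effect the paper reports.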

Experimental Insights

The results of the experiments are robust and rigorously analyzed:

  • Warmup Learning Rate:
    • Implementing a warmup strategy provides initial stabilization and leads to improved model performance compared to using a constant learning rate.
    • The learning rate is gradually increased in the initial epochs, before following a standard schedule.
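A minimal sketch of this schedule, using the settings reported in the paper (base learning rate 3.5e-4, linear warmup over the first 10 epochs, step decays by a factor of 10 at epochs 40 and 70):

```python
def warmup_lr(epoch, base_lr=3.5e-4):
    """Learning rate at a given (1-indexed) epoch: linear warmup for 10 epochs,
    then a standard step-decay schedule at epochs 40 and 70."""
    if epoch <= 10:
        return base_lr * epoch / 10  # linear ramp from base_lr/10 to base_lr
    if epoch <= 40:
        return base_lr
    if epoch <= 70:
        return base_lr * 0.1
    return base_lr * 0.01
```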
  • Random Erasing Augmentation (REA):
    • REA addresses occlusion by randomly erasing parts of an input image during training.
    • Although it significantly boosts performance on within-domain tasks, its effectiveness slightly diminishes in cross-domain scenarios.
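A simplified sketch of the erasing operation on a 2D image represented as a list of rows (the original REA samples the erased area and aspect ratio from ranges and fills with the image mean; here the patch is a fixed fraction of the image for brevity):

```python
import random

def random_erase(img, prob=0.5, area_frac=0.25, fill=0.0, rng=random):
    """With probability `prob`, erase a square patch covering roughly
    `area_frac` of the image, simulating occlusion during training."""
    if rng.random() > prob:
        return img
    h, w = len(img), len(img[0])
    eh = max(1, int(h * area_frac ** 0.5))  # erased patch height
    ew = max(1, int(w * area_frac ** 0.5))  # erased patch width
    top = rng.randrange(0, h - eh + 1)
    left = rng.randrange(0, w - ew + 1)
    out = [row[:] for row in img]
    for r in range(top, top + eh):
        for c in range(left, left + ew):
            out[r][c] = fill
    return out
```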
  • Label Smoothing (LS):
    • Label smoothing promotes better generalization by preventing the model from becoming too confident on training data.
    • It proved beneficial across various configurations, enhancing both within-domain and cross-domain performances.
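A small sketch of the smoothed cross-entropy, following the paper's construction: every class receives ε/N of the probability mass and the true class keeps 1 − (N−1)ε/N, with ε = 0.1:

```python
import math

def label_smoothing_ce(logits, target, num_classes, eps=0.1):
    """Cross-entropy against a smoothed target distribution instead of a
    one-hot label, discouraging over-confident predictions."""
    # numerically stable softmax log-probabilities
    m = max(logits)
    log_z = m + math.log(sum(math.exp(z - m) for z in logits))
    log_probs = [z - log_z for z in logits]
    # smoothed target distribution
    q = [eps / num_classes] * num_classes
    q[target] = 1.0 - (num_classes - 1) * eps / num_classes
    return -sum(qi * lp for qi, lp in zip(q, log_probs))
```

With `eps=0.0` this reduces to ordinary cross-entropy; with `eps > 0` a very confident correct prediction still incurs a small loss.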
  • Last Stride Adjustment:
    • Modifying the last stride from 2 to 1 in the ResNet50 backbone increases the feature map's spatial size, contributing to the model's robustness.
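The effect on feature-map size can be checked with simple arithmetic. ResNet50 applies five stride-2 spatial reductions (initial conv, max-pool, and stages 2-4); changing the last stage's stride to 1 doubles the final feature map in each dimension. 256×128 is the input resolution used in the paper:

```python
def conv_out_size(size, stride):
    """Output spatial size of a stride-s layer with 'same'-style padding: ceil(size / s)."""
    return -(-size // stride)

def feature_map_size(input_hw=(256, 128), last_stride=2):
    """Final ResNet50 feature-map size: four fixed stride-2 reductions
    (conv1, maxpool, stage 2, stage 3) plus the configurable last stage."""
    h, w = input_hw
    for s in [2, 2, 2, 2, last_stride]:
        h, w = conv_out_size(h, s), conv_out_size(w, s)
    return h, w
```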
  • BNNeck and Center Loss:
    • The BNNeck design led to considerable improvements by reconciling the conflicting objectives of the ID and triplet losses.
    • Center loss further refined the performance by penalizing distances between features and their corresponding class centers.
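The center-loss term itself is a short formula, L_C = ½ Σ_j ‖f_j − c_{y_j}‖², summed over the mini-batch. A plain-Python sketch, where `centers` is assumed to be a mapping from class id to a center vector (in the actual method the centers are learned alongside the network):

```python
def center_loss(features, labels, centers):
    """Half the summed squared distance between each feature vector and the
    center of its class, pulling same-class features together."""
    return 0.5 * sum(
        sum((fd - cd) ** 2 for fd, cd in zip(f, centers[y]))
        for f, y in zip(features, labels))
```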

Implications and Future Directions

The model proposed in the paper sets a strong baseline, employing only global features while still outperforming many state-of-the-art methods.

Practical Implications:

  • The proposed tricks enable better-performing models without introducing significant computational overhead.
  • The model is particularly suitable for industrial applications where simplicity and efficiency are critical.

Theoretical Implications:

  • This work underscores the weaknesses in using weak baselines for evaluating new methods, establishing the importance of robust baselines.
  • The separation and specialization of loss functions via BNNeck could inspire more targeted approaches in other multi-loss scenarios.

Future Developments:

  • Exploring additional training tricks and their combinations will be vital to further enhancements.
  • Investigating the applicability of BNNeck to more complex multi-branch models or other neural network architectures.
  • Extending the approach to other datasets and domains to further establish its generalizability.

Overall, this paper presents a detailed study of optimizing person ReID with deep learning. The proposed strong baseline and evaluated tricks provide an insightful reference for future research, emphasizing the significance of a methodical approach to training neural networks for ReID.
