Identity-aware Dual-constraint Network for Cloth-Changing Person Re-identification (2403.08270v2)

Published 13 Mar 2024 in cs.CV

Abstract: Cloth-Changing Person Re-Identification (CC-ReID) aims to accurately identify the target person in more realistic surveillance scenarios, where pedestrians usually change their clothing. Despite great progress, limited cloth-changing training samples in existing CC-ReID datasets still prevent the model from adequately learning cloth-irrelevant features. In addition, due to the absence of explicit supervision to keep the model constantly focused on cloth-irrelevant areas, existing methods are still hampered by the disruption of clothing variations. To solve the above issues, we propose an Identity-aware Dual-constraint Network (IDNet) for the CC-ReID task. Specifically, to help the model extract cloth-irrelevant clues, we propose a Clothes Diversity Augmentation (CDA), which generates more realistic cloth-changing samples by enriching the clothing color while preserving the texture. In addition, a Multi-scale Constraint Block (MCB) is designed, which extracts fine-grained identity-related features and effectively transfers cloth-irrelevant knowledge. Moreover, a Counterfactual-guided Attention Module (CAM) is presented, which learns cloth-irrelevant features from channel and space dimensions and utilizes the counterfactual intervention for supervising the attention map to highlight identity-related regions. Finally, a Semantic Alignment Constraint (SAC) is designed to facilitate high-level semantic feature interaction. Comprehensive experiments on four CC-ReID datasets indicate that our method outperforms prior state-of-the-art approaches.
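The abstract gives no implementation details for the Clothes Diversity Augmentation, so the following is only an illustrative sketch of the underlying idea (enriching clothing color while preserving texture), not the paper's actual method. One simple way to change color without touching texture is a hue rotation in YIQ space, applied only inside a clothing mask: the luminance channel, which carries shading and texture, is left unchanged. All function names here are our own.

```python
import numpy as np

def hue_rotate(img, degrees):
    """Rotate hue in YIQ space. Luminance (Y) is untouched, so
    texture/shading is preserved while colors shift."""
    theta = np.deg2rad(degrees)
    # Standard RGB -> YIQ conversion matrix.
    to_yiq = np.array([[0.299, 0.587, 0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523, 0.312]])
    to_rgb = np.linalg.inv(to_yiq)
    # Rotation acts only on the chroma (I, Q) plane.
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(theta), -np.sin(theta)],
                    [0.0, np.sin(theta), np.cos(theta)]])
    m = to_rgb @ rot @ to_yiq
    out = img.reshape(-1, 3) @ m.T
    return np.clip(out, 0.0, 1.0).reshape(img.shape)

def augment_clothes(img, clothes_mask, degrees):
    """Apply the hue shift only where a (hypothetical) clothing
    parsing mask is 1; the rest of the pedestrian is unchanged."""
    shifted = hue_rotate(img, degrees)
    mask = clothes_mask[..., None].astype(img.dtype)
    return img * (1.0 - mask) + shifted * mask
```

In practice the clothing mask would come from a human parsing model; here it is assumed to be given. The key property is that pixels outside the mask are bit-identical to the input and luminance inside the mask is preserved up to clipping.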
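The counterfactual intervention in the attention module can be illustrated in a simplified form. The general idea, used in counterfactual attention learning, is to compare the prediction made with the learned attention map against predictions made with random (counterfactual) attention maps; maximizing that gap supervises the attention to highlight genuinely identity-related regions. The sketch below is spatial-only numpy pseudocode under our own assumptions (the paper's module also operates on the channel dimension), not the paper's implementation.

```python
import numpy as np

def attend_and_classify(features, attention, weights):
    """Pool H x W x C features with a spatial attention map,
    then apply a linear classifier to get logits."""
    pooled = (features * attention[..., None]).sum(axis=(0, 1))
    return pooled @ weights

def counterfactual_effect(features, attention, weights, n_samples=8, seed=0):
    """Logits under the learned attention minus the average logits
    under random counterfactual attention maps. A training loss can
    maximize this effect for the correct identity, forcing the
    attention onto identity-related regions."""
    rng = np.random.default_rng(seed)
    factual = attend_and_classify(features, attention, weights)
    h, w = attention.shape
    counterfactual = np.zeros_like(factual)
    for _ in range(n_samples):
        rand = rng.random((h, w))
        rand /= rand.sum()  # normalize like a softmax attention map
        counterfactual += attend_and_classify(features, rand, weights)
    return factual - counterfactual / n_samples
```

If the learned attention concentrates on the one spatial cell carrying the discriminative feature, the effect for the corresponding class is large and positive, whereas an attention no better than random would yield an effect near zero.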

Authors (6)
  1. Peini Guo
  2. Mengyuan Liu
  3. Hong Liu
  4. Ruijia Fan
  5. Guoquan Wang
  6. Bin He
