CLIP-Driven Semantic Discovery Network for Visible-Infrared Person Re-Identification (2401.05806v2)
Abstract: Visible-infrared person re-identification (VIReID) matches identities across person images captured in different modalities. Because of the modality gap between visible and infrared images, cross-modality identity matching is challenging. Recognizing that high-level semantics of pedestrian appearance, such as gender, body shape, and clothing style, remain consistent across modalities, this paper bridges the modality gap by infusing visual features with high-level semantics. Since CLIP can sense the high-level semantic information corresponding to visual representations, we explore its application to VIReID. We propose a CLIP-Driven Semantic Discovery Network (CSDN) consisting of a Modality-specific Prompt Learner, Semantic Information Integration (SII), and High-level Semantic Embedding (HSE). Specifically, considering the diversity that modality discrepancies introduce into language descriptions, we devise bimodal learnable text tokens to capture modality-private semantic information for visible and infrared images, respectively. Then, acknowledging that the semantic details of the two modalities are complementary, we integrate the text features of the bimodal language descriptions into comprehensive semantics. Finally, we establish a connection between the integrated text features and the visual features of both modalities; this embeds rich high-level semantic information into the visual representations and thereby promotes their modality invariance. Experimental evaluations on multiple widely used benchmarks substantiate the effectiveness and superiority of CSDN over existing methods. The code will be released at https://github.com/nengdong96/CSDN.
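To make the three-stage pipeline concrete, below is a minimal PyTorch sketch of one plausible reading of the abstract. It is not the authors' released implementation: the module and function names, the mean-pooling stand-in for CLIP's frozen text encoder, the averaging fusion used for SII, and the contrastive form of the HSE objective are all our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalitySpecificPromptLearner(nn.Module):
    """Bimodal learnable text tokens: a private learnable context per
    identity and per modality (visible / infrared), as the abstract
    describes. (Name and shapes are illustrative assumptions.)"""

    def __init__(self, num_ids: int, n_ctx: int = 4, dim: int = 512):
        super().__init__()
        # [num_ids, n_ctx, dim] learnable context tokens per modality.
        self.vis_ctx = nn.Parameter(0.02 * torch.randn(num_ids, n_ctx, dim))
        self.ir_ctx = nn.Parameter(0.02 * torch.randn(num_ids, n_ctx, dim))

    def forward(self, ids: torch.Tensor):
        # In CLIP-style prompt learning these tokens would pass through the
        # frozen CLIP text encoder; mean-pooling is a simplified stand-in.
        t_vis = self.vis_ctx[ids].mean(dim=1)
        t_ir = self.ir_ctx[ids].mean(dim=1)
        return t_vis, t_ir


def semantic_information_integration(t_vis, t_ir):
    """SII: fuse the two modality-private text features into one
    comprehensive description (simple averaging assumed here)."""
    return F.normalize(t_vis + t_ir, dim=-1)


def high_level_semantic_embedding_loss(v_feat, t_feat, temperature=0.07):
    """HSE: tie visual features to the integrated text features via a
    CLIP-style contrastive objective, assuming row i of `v_feat` and
    row i of `t_feat` belong to the same identity."""
    v = F.normalize(v_feat, dim=-1)
    t = F.normalize(t_feat, dim=-1)
    logits = v @ t.t() / temperature
    targets = torch.arange(v.size(0), device=v.device)
    return F.cross_entropy(logits, targets)


# Toy usage: a batch of 8 distinct identities; the 512-d visual features
# would come from the image encoder (CLIP's, in the paper's setting).
prompts = ModalitySpecificPromptLearner(num_ids=100)
ids = torch.arange(8)
t_vis, t_ir = prompts(ids)
t_joint = semantic_information_integration(t_vis, t_ir)
v_feat = torch.randn(8, 512)  # placeholder visual features
loss = high_level_semantic_embedding_loss(v_feat, t_joint)
```

The key design point this sketch captures is that the text branch, not the image branch, carries the modality-specific parameters: each modality gets its own learnable tokens, their fused text feature serves as a modality-invariant anchor, and the visual features of both modalities are pulled toward that shared anchor.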