
Cloth-Changing Person Re-identification from A Single Image with Gait Prediction and Regularization (2103.15537v4)

Published 29 Mar 2021 in cs.CV

Abstract: Cloth-Changing person re-identification (CC-ReID) aims at matching the same person across different locations over a long duration, e.g., over days, and therefore inevitably meets the challenge of changing clothing. In this paper, we focus on handling the CC-ReID problem under a more challenging setting, i.e., from just a single image, which enables high-efficiency and latency-free pedestrian identification for real-time surveillance applications. Specifically, we introduce gait recognition as an auxiliary task to drive the image ReID model to learn cloth-agnostic representations by leveraging personally unique and cloth-independent gait information; we name this framework GI-ReID. GI-ReID adopts a two-stream architecture that consists of an image ReID-Stream and an auxiliary gait recognition stream (Gait-Stream). The Gait-Stream, which is discarded at inference for high computational efficiency, acts as a regulator that encourages the ReID-Stream to capture cloth-invariant biometric motion features during training. To obtain temporally continuous motion cues from a single image, we design a Gait Sequence Prediction (GSP) module for the Gait-Stream to enrich gait information. Finally, high-level semantics consistency over the two streams is enforced for effective knowledge regularization. Experiments on multiple image-based Cloth-Changing ReID benchmarks, e.g., LTCC, PRCC, Real28, and VC-Clothes, demonstrate that GI-ReID performs favorably against state-of-the-art methods. Code is available at https://github.com/jinx-USTC/GI-ReID.

Authors (10)
  1. Xin Jin (285 papers)
  2. Tianyu He (52 papers)
  3. Kecheng Zheng (48 papers)
  4. Zhiheng Yin (1 paper)
  5. Xu Shen (45 papers)
  6. Zhen Huang (114 papers)
  7. Ruoyu Feng (16 papers)
  8. Jianqiang Huang (62 papers)
  9. Xian-Sheng Hua (85 papers)
  10. Zhibo Chen (176 papers)
Citations (107)

Summary

Cloth-Changing Person Re-identification with Gait Prediction and Regularization

The paper presents a novel framework for addressing the challenging problem of cloth-changing person re-identification (CC-ReID) from a single image by leveraging gait information. The authors introduce a system named GI-ReID, which employs a two-stream architecture consisting of an image-based person ReID stream and an auxiliary gait recognition stream. This approach aims to facilitate accurate identity matching over long durations and across diverse locations, especially in surveillance scenarios where clothing changes are common.

Key Components and Innovations

  1. Gait-Driven ReID: The primary innovation of this framework is the integration of gait recognition as an auxiliary task. Gait features, which are inherently cloth-independent, assist the ReID model in learning more robust and invariant representations of individuals. This is crucial in scenarios where clothing can vary significantly between observations.
  2. Two-Stream Architecture: The GI-ReID framework is composed of a ReID-Stream and a Gait-Stream. The ReID-Stream processes the RGB image data for re-identification, while the Gait-Stream serves as a regulator, training the network to focus on cloth-invariant features. The Gait-Stream is discarded during inference to maintain high efficiency.
  3. Gait Sequence Prediction: Since capturing a full gait sequence generally requires multiple frames, the authors propose a Gait Sequence Prediction (GSP) module. This module generates gait sequences from a single image, allowing the system to leverage temporal motion cues without needing video input.
  4. Semantics Consistency Constraint: The system enforces semantics consistency across the two streams, ensuring that features learned by both reflect the identity of the same person despite clothing changes. This constraint guides the ReID-Stream towards more generalized, cloth-invariant feature learning (a minimal training sketch of these components follows this list).
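
To make the interplay of these components concrete, below is a minimal PyTorch-style sketch of a single GI-ReID-like training step: a hypothetical GSP stand-in predicts a short silhouette sequence from one image, the ReID-Stream and Gait-Stream encode the RGB image and the predicted sequence respectively, and a consistency term pulls their features together. All module names, architectures, loss forms, and weights here are illustrative assumptions rather than the authors' implementation (which is available in the linked repository).

```python
# Minimal sketch of a two-stream training step in the spirit of GI-ReID.
# Everything below (module designs, shapes, loss weights) is assumed for
# illustration, not taken from the official codebase.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaitSequencePredictor(nn.Module):
    """Hypothetical GSP stand-in: predicts a T-frame silhouette sequence
    from a single-image silhouette."""
    def __init__(self, t_frames=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, t_frames, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, silhouette):            # (B, 1, H, W)
        return self.net(silhouette)           # (B, T, H, W) pseudo gait sequence

class SimpleEncoder(nn.Module):
    """Toy encoder producing a global feature vector."""
    def __init__(self, in_ch, dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.backbone(x).flatten(1)    # (B, dim)

# Two streams: ReID-Stream on RGB, Gait-Stream on the predicted gait sequence.
gsp = GaitSequencePredictor(t_frames=8)
reid_stream = SimpleEncoder(in_ch=3)          # kept at inference time
gait_stream = SimpleEncoder(in_ch=8)          # auxiliary, dropped at inference
classifier = nn.Linear(256, 150)              # 150 identities, illustrative

def training_step(rgb, silhouette, labels):
    gait_seq = gsp(silhouette)                # enrich a single image with motion cues
    f_reid = reid_stream(rgb)                 # appearance feature from the RGB image
    f_gait = gait_stream(gait_seq)            # cloth-agnostic motion feature

    id_loss = F.cross_entropy(classifier(f_reid), labels)
    # High-level consistency: pull the two streams' features together so the
    # ReID-Stream absorbs cloth-invariant cues (cosine form assumed here).
    consistency = 1.0 - F.cosine_similarity(f_reid, f_gait).mean()
    return id_loss + 0.5 * consistency        # 0.5 is an assumed weight
```

At inference, only the ReID-Stream (with its classifier or a nearest-neighbor search over its features) would be retained, mirroring the paper's point that the Gait-Stream is discarded for efficiency.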

Performance Evaluation

The framework was extensively evaluated on several benchmarks, including LTCC, PRCC, Real28, and VC-Clothes, demonstrating favorable performance compared to state-of-the-art methods. Specifically, the GI-ReID model showed notable improvements in Rank-1 and mAP scores, highlighting its ability to maintain accuracy despite clothing variations. Notably, it achieved 10.4% mAP on Real28 and performed strongly across different evaluation settings.
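
For reference, Rank-1 and mAP are the standard retrieval metrics behind these numbers. The sketch below computes both from query and gallery embeddings; it deliberately omits dataset-specific protocol details (e.g., same-camera filtering or cloth-change-only matching rules), which vary per benchmark and are not reproduced here.

```python
# Simplified Rank-1 / mAP computation for ReID-style retrieval; assumes
# cosine distance and no camera- or clothing-based filtering.
import numpy as np

def rank1_and_map(query_feats, query_ids, gallery_feats, gallery_ids):
    # Cosine distance between every query and every gallery embedding.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    dist = 1.0 - q @ g.T                               # (num_q, num_g)

    rank1_hits, aps = [], []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])                    # nearest gallery first
        matches = (gallery_ids[order] == query_ids[i]).astype(np.float64)
        if matches.sum() == 0:
            continue                                   # query has no gallery match
        rank1_hits.append(matches[0])                  # is the top match correct?
        precisions = np.cumsum(matches) / (np.arange(len(matches)) + 1)
        aps.append((precisions * matches).sum() / matches.sum())
    return float(np.mean(rank1_hits)), float(np.mean(aps))
```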

Implications and Future Developments

The integration of gait recognition into person re-identification systems highlights a promising direction for enhancing the robustness of surveillance applications. The approach's robustness to clothing changes broadens its applicability to real-world scenarios where continuous monitoring over time is crucial.

Potential future developments could explore enhancing the accuracy of gait predictions from single images under diverse conditions, such as occlusions and varying camera angles. Additionally, the approach may be fine-tuned for other forms of biometric recognition, expanding its utility beyond person ReID.

Conclusion

The GI-ReID framework demonstrates a substantial advance in person re-identification technology by effectively combining gait analysis and image-based recognition. It offers a robust solution for scenarios plagued by clothing variability, setting a foundation for future research in leveraging biometric cues for re-identification tasks.