
Perceive Where to Focus: Learning Visibility-aware Part-level Features for Partial Person Re-identification (1904.00537v1)

Published 1 Apr 2019 in cs.CV

Abstract: This paper considers a realistic problem in person re-identification (re-ID) task, i.e., partial re-ID. Under partial re-ID scenario, the images may contain a partial observation of a pedestrian. If we directly compare a partial pedestrian image with a holistic one, the extreme spatial misalignment significantly compromises the discriminative ability of the learned representation. We propose a Visibility-aware Part Model (VPM), which learns to perceive the visibility of regions through self-supervision. The visibility awareness allows VPM to extract region-level features and compare two images with focus on their shared regions (which are visible on both images). VPM gains two-fold benefit toward higher accuracy for partial re-ID. On the one hand, compared with learning a global feature, VPM learns region-level features and benefits from fine-grained information. On the other hand, with visibility awareness, VPM is capable to estimate the shared regions between two images and thus suppresses the spatial misalignment. Experimental results confirm that our method significantly improves the learned representation and the achieved accuracy is on par with the state of the art.

Citations (290)

Summary

  • The paper introduces VPM, which learns visibility-aware part-level features to effectively address spatial misalignment in partial person re-identification.
  • It employs a self-supervision approach with a region locator and feature extractor to isolate visible regions despite occlusion.
  • VPM outperforms baselines on datasets like Partial-REID, achieving a 67.7% Rank-1 accuracy under challenging conditions.

Learning Visibility-aware Part-level Features for Partial Person Re-identification

The paper "Perceive Where to Focus: Learning Visibility-aware Part-level Features for Partial Person Re-identification" addresses a challenging scenario within person re-identification (re-ID): partial re-ID. The problem arises when a pedestrian image is only partially visible, either because of occlusion or because the person is partly outside the camera's view. Conventional re-ID approaches largely depend on holistic images and struggle when faced with severe spatial misalignment between query and gallery images, leaving a notable gap in handling partial re-ID scenarios.

Key Contributions

The authors introduce a novel model named Visibility-aware Part Model (VPM) to enhance the accuracy of partial re-ID. VPM utilizes self-supervision to detect which regions of a pedestrian image are visible, thereby allowing the model to extract fine-grained region-level features. This approach helps mitigate two primary challenges:

  • Spatial Misalignment: By focusing on shared regions visible in both query and gallery images, VPM suppresses spatial misalignment issues prevalent in partial re-ID scenarios.
  • Noise from Unshared Regions: The model eliminates distractions caused by comparing shared regions with unshared ones in holistic pedestrian images.
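The shared-region comparison described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: the function name, the Euclidean per-region distance, and the product-of-visibilities weighting are assumptions made for clarity.

```python
import numpy as np

def shared_region_distance(feats_a, feats_b, vis_a, vis_b, eps=1e-8):
    """Compare two images only on regions visible in both.

    feats_*: (p, c) array of p region features of dimension c.
    vis_*:   (p,) visibility scores in [0, 1] per region.
    Regions invisible in either image receive near-zero weight,
    so they contribute nothing to the final distance.
    """
    per_region = np.linalg.norm(feats_a - feats_b, axis=1)  # (p,) region distances
    shared = vis_a * vis_b                                  # shared-visibility weight
    return float(np.sum(shared * per_region) / (np.sum(shared) + eps))
```

For example, with three regions where the third is occluded in one image (visibility 0), its large feature distance is simply ignored, which is exactly the misalignment suppression the bullet points describe.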

The proposed VPM framework learns from partial pedestrian images by detecting visible regions and subsequently extracting features specific to those parts. This methodology leverages a set of pre-defined regions on a holistic person image for supervised learning on convolutional feature maps.

Methodology

VPM comprises the following core components:

  1. Region Locator: This module predicts, for every location on the convolutional feature map, the probability that it belongs to each pre-defined region. Learned under self-supervision signals, these per-pixel probabilities both delineate the regions and reveal which of them are actually visible.
  2. Region Feature Extractor: Given the region probabilities, VPM pools the feature map within each region to produce a dedicated feature per region, so each feature describes a specific body part rather than the whole image.
  3. Self-Supervision: Self-supervision lets VPM learn visibility prediction without auxiliary cues or manually annotated region labels, making the approach adaptable to various settings and improving the model's generalization.
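A minimal numpy sketch of how the first two components could fit together: a 1x1-conv-style scoring of each spatial location followed by probability-weighted pooling. The overall shape follows the paper's description, but the function names, tensor shapes, and the normalization by visibility mass are illustrative assumptions.

```python
import numpy as np

def softmax(scores, axis=0):
    e = np.exp(scores - scores.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def locate_and_pool(feature_map, region_weights):
    """Region locator + region feature extractor in one pass.

    feature_map:    (c, h, w) convolutional feature tensor.
    region_weights: (p, c) weights of a 1x1 conv scoring each of
                    the p regions at every spatial location.
    Returns (p, c) region features and (p,) visibility scores.
    """
    c = feature_map.shape[0]
    flat = feature_map.reshape(c, -1)           # (c, h*w)
    probs = softmax(region_weights @ flat)      # (p, h*w): per-pixel region probs
    visibility = probs.sum(axis=1)              # prob mass assigned to each region
    pooled = probs @ flat.T                     # (p, c): probability-weighted sums
    return pooled / (visibility[:, None] + 1e-8), visibility
```

A region that captures little probability mass gets a low visibility score, which is what later allows two images to be compared only on their shared regions.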

The network is trained on existing holistic datasets with random cropping to simulate occlusion, thereby mimicking partial views. Key to the architecture's performance is balancing cross-entropy and triplet losses during training, which together ensure robust region-level feature learning.
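The random-crop self-supervision can be sketched as a toy version that uses p horizontal stripes as the pre-defined regions; the stripe layout, the top-anchored crop policy, and the `min_keep` parameter are assumptions made for illustration, not the paper's exact recipe.

```python
import random
import numpy as np

def crop_with_region_labels(h, p=3, min_keep=0.5):
    """Simulate a partial view of a holistic image of height h.

    The holistic image is divided into p equal horizontal stripes
    (the pre-defined regions). A random crop keeps only the top
    rows; each kept row inherits its stripe index from the holistic
    image, yielding per-pixel region labels -- and the set of
    visible regions -- for free, with no manual annotation.
    """
    full_labels = np.minimum(np.arange(h) * p // h, p - 1)  # stripe index per row
    keep = random.randint(int(h * min_keep), h)             # rows surviving the crop
    crop_labels = full_labels[:keep]
    visible = sorted(set(crop_labels.tolist()))             # regions still visible
    return crop_labels, visible
```

These automatically generated labels supervise the region locator, while regions absent from `visible` can be excluded when that image is compared against another.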

Evaluation and Outcomes

VPM has been evaluated extensively against leading re-ID methods on both realistic partial datasets (Partial-REID and Partial-iLIDS) and large-scale holistic datasets with synthesized partial queries (Market-1501 and DukeMTMC-reID). Notably, visibility awareness allows VPM to outperform global feature learning baselines and even established part-based methods such as PCB on these datasets.

Numerical Evaluation Highlights:

  • On Partial-REID, VPM achieved 67.7% Rank-1 accuracy, surpassing competing methods and showcasing its adaptability to occluded scenarios.
  • The framework demonstrates strong scalability to larger datasets, confirming its viability as a comprehensive partial re-ID solution.

Implications and Future Work

The development of VPM marks incremental yet significant progress in person re-identification, particularly for the partial re-ID challenges that conventional methods resolve inadequately. The model's reliance on self-supervision for visibility prediction highlights a promising direction for future research, where region-level attention can be further refined for more complex occlusion scenarios.

Moving forward, additional work could explore integrating depth estimation or contextual information to further enhance visibility-aware models. VPM's architecture also suggests potential applications beyond pedestrian re-ID, such as vehicle or object recognition in surveillance contexts.

The paper presents a crucial step towards improving accuracy and reliability in real-world applications where full visibility of subjects cannot always be guaranteed, paving the way for more advanced and adaptable identification systems.