RGB-Infrared Cross-Modality Person Re-Identification via Joint Pixel and Feature Alignment (1910.05839v2)

Published 13 Oct 2019 in cs.CV

Abstract: RGB-Infrared (IR) person re-identification is an important and challenging task due to large cross-modality variations between RGB and IR images. Most conventional approaches aim to bridge the cross-modality gap with feature alignment through feature representation learning. Different from existing methods, in this paper we propose a novel, end-to-end Alignment Generative Adversarial Network (AlignGAN) for the RGB-IR Re-ID task. The proposed model enjoys several merits. First, it can exploit pixel alignment and feature alignment jointly. To the best of our knowledge, this is the first work to model the two alignment strategies jointly for the RGB-IR Re-ID problem. Second, the proposed model consists of a pixel generator, a feature generator, and a joint discriminator. By playing a min-max game among the three components, our model is able not only to alleviate the cross-modality and intra-modality variations but also to learn identity-consistent features. Extensive experimental results on two standard benchmarks demonstrate that the proposed model performs favorably against state-of-the-art methods. In particular, on the SYSU-MM01 dataset, our model achieves absolute gains of 15.4% and 12.9% in Rank-1 accuracy and mAP, respectively.

Overview of RGB-Infrared Cross-Modality Person Re-Identification via Joint Pixel and Feature Alignment

The paper explores advanced methodologies for RGB-Infrared (RGB-IR) person re-identification (Re-ID), a crucial component of surveillance systems. The primary challenge in RGB-IR Re-ID is the significant modality gap between RGB and infrared images, which complicates the direct application of single-modality techniques. The authors introduce the Alignment Generative Adversarial Network (AlignGAN), designed to address both cross-modality and intra-modality variations by jointly performing pixel and feature alignment.

Core Contributions

  1. Joint Pixel and Feature Alignment: The paper pioneers the combined use of pixel-level and feature-level alignment to bridge the modality gap in Re-ID. Traditional approaches rely on either feature alignment or pixel alignment alone; to the authors' knowledge, this is the first work to model the two strategies jointly for the RGB-IR Re-ID problem.
  2. Robust Model Components: The AlignGAN framework comprises three components: a pixel generator, a feature generator, and a joint discriminator. The pixel generator translates RGB images into the IR domain (fake IR), easing learning across the two modalities. The feature generator aligns features of fake-IR and real-IR images in a shared feature space. The joint discriminator plays a dual role: it distinguishes real from synthetic image-feature pairs and, through adversarial training, steers learning toward identity-consistent features (a minimal sketch of the three components and one adversarial step follows this list).
  3. Performance Results: Extensive experiments on established benchmarks such as SYSU-MM01 demonstrate superior performance: AlignGAN achieves an absolute gain of 15.4% in Rank-1 accuracy and 12.9% in mean average precision (mAP) over the previous state of the art, evidence of its promise for real-world deployment.
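
To make the min-max game concrete, below is a minimal PyTorch-style sketch of the three components and one adversarial step. The module names, toy layer choices, and tensor shapes are illustrative assumptions, not the authors' released implementation (linked below), which uses full CNN backbones and additional losses.

```python
import torch
import torch.nn as nn

class PixelGenerator(nn.Module):
    """Translates an RGB image into a fake IR image (pixel alignment).

    Toy encoder-decoder; the real model is a deeper image-translation net.
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, rgb):
        return self.net(rgb)  # same spatial size as the input

class FeatureGenerator(nn.Module):
    """Embeds fake-IR and real-IR images into one shared feature space."""
    def __init__(self, dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, img):
        return self.backbone(img)

class JointDiscriminator(nn.Module):
    """Scores (image, feature) pairs: real-IR pairs vs. fake-IR pairs."""
    def __init__(self, dim=256):
        super().__init__()
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64 + dim, 1)

    def forward(self, img, feat):
        return self.head(torch.cat([self.img_enc(img), feat], dim=1))

G_p, G_f, D = PixelGenerator(), FeatureGenerator(), JointDiscriminator()
bce = nn.BCEWithLogitsLoss()
rgb = torch.randn(4, 3, 128, 64)  # RGB person crops
ir = torch.randn(4, 3, 128, 64)   # real IR person crops

# Discriminator step: push real (IR image, feature) pairs toward 1 and
# fake pairs toward 0; generator outputs are detached here.
with torch.no_grad():
    fake_ir = G_p(rgb)
    real_feat, fake_feat = G_f(ir), G_f(fake_ir)
d_loss = bce(D(ir, real_feat), torch.ones(4, 1)) + \
         bce(D(fake_ir, fake_feat), torch.zeros(4, 1))

# Generator step: the pixel and feature generators cooperate to make fake
# pairs look real (only the generators' optimizers would be stepped here).
fake_ir = G_p(rgb)
g_loss = bce(D(fake_ir, G_f(fake_ir)), torch.ones(4, 1))
# The full objective adds identity-classification / consistency terms so
# the aligned features remain identity-consistent (omitted in this sketch).
```

Discriminating joint (image, feature) pairs, rather than images and features with two separate discriminators, is what ties the pixel and feature alignment together in a single adversarial objective.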

Implications and Future Directions

The introduction and validation of the AlignGAN framework contribute substantially to the theoretical and practical aspects of cross-modality Re-ID. From a theoretical perspective, the joint modeling of pixel and feature alignment offers a comprehensive approach that could be extended to other cross-modality or multi-modality recognition tasks. Practically, the ability to maintain identity-consistent features across different modalities showcases its applicability in real-world surveillance where conditions fluctuate between daylight and nighttime, necessitating a seamless transition between RGB and IR imaging systems.

Future explorations might delve into fine-tuning adversarial training techniques to further minimize identity inconsistency and modality gaps. Additionally, examining the scalability of AlignGAN for larger datasets or more complex surveillance scenarios could enhance its utility in diverse AI applications. The code availability (https://github.com/wangguanan/AlignGAN) also invites the broader research community to experiment and build upon this work, potentially leading to refined or new methods that leverage this joint alignment strategy for other complex visual recognition tasks.
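
For readers who pick up the released code, the two headline metrics are straightforward to compute from extracted features. Below is a minimal NumPy sketch of Rank-1 and mAP under a simplified single-query protocol; the official SYSU-MM01 evaluation additionally filters matches by camera and averages over repeated gallery samplings, which is omitted here.

```python
import numpy as np

def rank1_and_map(q_feats, q_ids, g_feats, g_ids):
    """Rank-1 accuracy and mAP for query/gallery features.

    q_feats: (Q, D) query features, q_ids: (Q,) identity labels;
    g_feats: (G, D) gallery features, g_ids: (G,) identity labels.
    """
    # Euclidean distance from every query to every gallery item.
    dists = np.linalg.norm(q_feats[:, None] - g_feats[None, :], axis=2)
    rank1_hits, aps = 0, []
    for i in range(len(q_feats)):
        order = np.argsort(dists[i])          # gallery, nearest first
        matches = g_ids[order] == q_ids[i]    # boolean relevance vector
        rank1_hits += int(matches[0])         # is the top match correct?
        hits = np.flatnonzero(matches)        # ranks of all correct items
        if hits.size:                         # average precision per query
            aps.append(((np.arange(hits.size) + 1) / (hits + 1)).mean())
    return rank1_hits / len(q_feats), float(np.mean(aps))
```

The gains reported in the abstract are absolute percentage points on these two metrics over the prior state of the art.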

Authors (6)
  1. Guan'an Wang
  2. Tianzhu Zhang
  3. Jian Cheng
  4. Si Liu
  5. Yang Yang
  6. Zengguang Hou
Citations (326)