Differentiable Channel Selection in Self-Attention For Person Re-Identification

Published 13 May 2025 in cs.CV and cs.LG | (2505.08961v1)

Abstract: In this paper, we propose a novel attention module termed the Differentiable Channel Selection Attention module, or the DCS-Attention module. In contrast with conventional self-attention, the DCS-Attention module features selection of informative channels in the computation of the attention weights. The selection of the feature channels is performed in a differentiable manner, enabling seamless integration with DNN training. Our DCS-Attention is compatible with either fixed neural network backbones or learnable backbones with Differentiable Neural Architecture Search (DNAS), leading to DCS with Fixed Backbone (DCS-FB) and DCS-DNAS, respectively. Importantly, our DCS-Attention is motivated by the principle of Information Bottleneck (IB), and a novel variational upper bound for the IB loss, which can be optimized by SGD, is derived and incorporated into the training loss of the networks with the DCS-Attention modules. In this manner, a neural network with DCS-Attention modules is capable of selecting the most informative channels for feature extraction so that it enjoys state-of-the-art performance for the Re-ID task. Extensive experiments on multiple person Re-ID benchmarks using both DCS-FB and DCS-DNAS show that DCS-Attention significantly enhances the prediction accuracy of DNNs for person Re-ID, which demonstrates the effectiveness of DCS-Attention in learning discriminative features critical to identifying person identities. The code of our work is available at https://github.com/Statistical-Deep-Learning/DCS-Attention.

Summary

  • The paper introduces DCS-Attention, a module that integrates differentiable channel selection in self-attention to enhance feature discrimination in person re-identification.
  • It employs a binary Gumbel-Softmax approximation and optimizes a composite loss combining cross-entropy, triplet, and IBB losses to ensure robust training.
  • Experimental validation on datasets like Market-1501 and MSMT17 demonstrates improved mAP and efficiency across both CNN and Transformer architectures.

The paper "Differentiable Channel Selection in Self-Attention For Person Re-Identification" (2505.08961) introduces a novel attention module called Differentiable Channel Selection Attention (DCS-Attention) to enhance the performance of deep neural networks (DNNs) for person re-identification (Re-ID). The core idea is to selectively use informative channels when computing attention weights in self-attention modules, which is motivated by the Information Bottleneck (IB) principle.

Standard self-attention modules use all input channels to compute the affinity matrix between tokens, potentially including noisy or irrelevant information. The paper argues that selecting the most informative channels for this computation can lead to more discriminative features, particularly for fine-grained tasks like person Re-ID.

The proposed DCS-Attention module integrates a differentiable channel selection mechanism into the self-attention computation. Given an input feature $X \in \mathbb{R}^{N \times C}$ (where $N$ is the number of tokens and $C$ is the number of channels), a binary decision mask $\mathcal{M} \in \{0,1\}^{N \times C}$ is learned. This mask indicates which channels are selected for each token. The attention weights $A$ are then computed from the masked features as $A = \sigma\left((X \odot \mathcal{M})(X \odot \mathcal{M})^{\top}\right)$, where $\odot$ is the element-wise product and $\sigma$ is the Softmax function.
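As a rough illustration, the following PyTorch sketch computes the masked affinity described above. The function name is illustrative, and since the summary does not specify the value path, using the unmasked features as values is an assumption of this sketch rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dcs_attention(X: torch.Tensor, M: torch.Tensor) -> torch.Tensor:
    """Attention weights from channel-masked features: A = softmax((X*M)(X*M)^T).

    X: (N, C) token features; M: (N, C) hard {0, 1} channel-selection mask.
    """
    Xm = X * M                                         # keep only the selected channels
    A = F.softmax(Xm @ Xm.transpose(-1, -2), dim=-1)   # (N, N) attention weights
    # The value path is not specified in the summary; aggregating the
    # unmasked features is an assumption of this sketch.
    return A @ X
```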

To make the binary decision mask differentiable, the paper employs a simplified binary Gumbel-Softmax approximation during training. A linear layer is applied to the input features $X$ to generate parameters $\theta \in \mathbb{R}^{N \times C}$. The soft mask is computed as $\mathcal{M}_{id} = \sigma\left(\frac{\theta_{id} + \epsilon_{id}^{(1)} - \epsilon_{id}^{(2)}}{\tau}\right)$, where $\sigma$ is the Sigmoid function, $\epsilon^{(1)}, \epsilon^{(2)}$ are Gumbel noise samples, and $\tau$ is a temperature parameter. For the backward pass, a straight-through estimator is used: the forward pass applies the hard binary mask, with $\mathcal{M}_{id} = 1$ if the soft value exceeds $0.5$ and $\mathcal{M}_{id} = 0$ otherwise, while gradients flow through the soft relaxation. During inference, the Gumbel noise is set to $0$ and the same hard thresholding is applied.
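A minimal sketch of this relaxation, assuming the mask logits $\theta$ have already been produced by the linear layer (the function name and numerical constants are illustrative):

```python
import torch

def gumbel_sigmoid_mask(theta: torch.Tensor, tau: float = 1.0,
                        training: bool = True) -> torch.Tensor:
    """Binary Gumbel-Softmax relaxation with a straight-through estimator.

    theta: (N, C) mask logits. Returns a {0, 1} mask; during training the
    gradient flows through the soft sigmoid relaxation.
    """
    if training:
        eps = 1e-20
        # Difference of two Gumbel(0, 1) samples, as in the binary Gumbel-Softmax trick.
        g1 = -torch.log(-torch.log(torch.rand_like(theta) + eps) + eps)
        g2 = -torch.log(-torch.log(torch.rand_like(theta) + eps) + eps)
        soft = torch.sigmoid((theta + g1 - g2) / tau)
        hard = (soft > 0.5).float()
        # Forward pass uses the hard mask; the backward pass uses the soft gradient.
        return soft + (hard - soft).detach()
    # Inference: drop the Gumbel noise and threshold directly.
    return (torch.sigmoid(theta / tau) > 0.5).float()
```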

The motivation behind channel selection is linked to the Information Bottleneck principle, which suggests learning representations that are maximally informative about the target variable (person identity $Y$) while being minimally informative about the input data variations ($X$). The paper proposes to explicitly optimize the IB loss, defined as $I(F, X) - I(F, Y)$, where $F$ is the learned feature representation and $I(\cdot, \cdot)$ is mutual information. To make this loss optimizable by gradient descent, the paper derives a novel variational upper bound for the IB loss, termed IBB, which is formulated so that it can be computed and optimized by SGD with minibatches. The training objective becomes a composite loss combining the standard cross-entropy loss, the triplet loss (commonly used in Re-ID), and the IBB term:

$$\mathcal{L}_{\text{train}} = \text{CE} + \text{Triplet} + \eta \cdot \text{IBB}$$

Here, $\eta$ is a balancing factor tuned via cross-validation. Computing the IBB requires estimating probabilities of the learned and input features with respect to class centroids and updating a variational distribution $Q(F \mid Y)$.
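A sketch of how the three terms might be combined in practice; the batch-hard triplet loss below is a standard Re-ID formulation, and the IBB term is assumed to be computed elsewhere from the class centroids and $Q(F \mid Y)$ as described in the paper:

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(features, labels, margin=0.3):
    """Standard batch-hard triplet loss: hardest positive and negative per anchor."""
    dist = torch.cdist(features, features, p=2)           # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)      # same-identity mask
    d_ap = (dist * same.float()).max(dim=1).values         # farthest positive
    d_an = (dist + same.float() * 1e9).min(dim=1).values   # closest negative
    return F.relu(d_ap - d_an + margin).mean()

def composite_loss(logits, features, labels, ibb, eta=1.0):
    """L_train = CE + Triplet + eta * IBB. `ibb` is a scalar tensor assumed to be
    computed separately from the variational IB bound (not reproduced here)."""
    return (F.cross_entropy(logits, labels)
            + batch_hard_triplet_loss(features, labels)
            + eta * ibb)
```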

The DCS-Attention module and the IBB loss formulation can be integrated into various network architectures. The paper explores two main approaches:

  1. DCS with Fixed Backbone (DCS-FB): DCS-Attention modules are inserted after convolution stages in CNNs (like MobileNetV2, HRNet, ResNet50) or replace standard attention in Vision Transformers (like TransReID). These models are trained using the composite loss (a minimal integration sketch follows this list).
  2. DCS with Differentiable Neural Architecture Search (DCS-DNAS): DCS-Attention is integrated into a DNAS framework (specifically based on FBNetV2). Both the network architecture and the channel selection masks within the DCS modules are jointly learned during a search phase. The search loss includes the composite training loss and a latency cost term. After searching, the discovered architecture is retrained using the composite loss.
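For the fixed-backbone case, one possible way to wrap a CNN stage is sketched below. The class name is illustrative, and the DCS-Attention module is assumed to operate on flattened (B, N, C) token features, as in the earlier sketches.

```python
import torch.nn as nn

class StageWithDCS(nn.Module):
    """Wraps a CNN stage and applies a DCS-Attention module to its output feature map."""

    def __init__(self, stage: nn.Module, dcs_attention: nn.Module):
        super().__init__()
        self.stage = stage
        self.dcs = dcs_attention    # assumed to map (B, N, C) tokens to (B, N, C)

    def forward(self, x):
        x = self.stage(x)                                 # (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)             # (B, H*W, C) tokens
        tokens = self.dcs(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)
```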

Practical Implementation Details:

  • Differentiable Mask: The Gumbel-Softmax relaxation with a straight-through estimator allows gradient-based optimization of the channel selection. The temperature parameter $\tau$ controls the sharpness of the approximation and is typically annealed during training.
  • IBB Computation: Calculating IBB involves estimating conditional probabilities and mutual information terms. This requires maintaining and updating class centroids for both input and learned features and the variational distribution $Q(F \mid Y)$, which can be done per epoch or periodically based on accumulated batch statistics.
  • Network Integration:
    • For CNNs, DCS-Attention can be placed after feature extraction stages.
    • For Transformers, it replaces the standard multi-head self-attention mechanism, applying the channel selection to the Query and Key projections before computing the attention matrix.
  • Training: Standard optimizers like SGD or Adam can be used. Hyperparameters such as learning rate schedules, weight decay, and data augmentation (random cropping, flipping, erasing, mixup) follow standard Re-ID practice. The balancing factor $\eta$ for the IBB loss term needs careful tuning on a validation set; the paper suggests $\eta = 1$ worked well across different setups.
  • DNAS: DCS-DNAS adds complexity as it involves a bi-level optimization problem (network weights vs. architecture parameters). The architecture parameters, including those for channel selection in DCS, are typically optimized using a different optimizer (e.g., Adam) and training subset than the network weights (e.g., SGD).
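As a rough sketch of such an alternating bi-level update (all names here, including `model`, `search_loss`, the data loaders, and the parameter split, are placeholders rather than FBNetV2's actual API):

```python
import torch

# Placeholder parameter split: ordinary network weights vs. architecture /
# channel-selection parameters of the DCS modules.
w_opt = torch.optim.SGD(weight_params, lr=0.05, momentum=0.9, weight_decay=1e-4)
a_opt = torch.optim.Adam(arch_params, lr=1e-3)

for epoch in range(num_search_epochs):
    for (x_w, y_w), (x_a, y_a) in zip(weight_loader, arch_loader):
        # Step 1: update network weights on one data split.
        w_opt.zero_grad()
        search_loss(model(x_w), y_w).backward()   # composite loss + latency cost
        w_opt.step()

        # Step 2: update architecture / channel-selection parameters on the other split.
        a_opt.zero_grad()
        search_loss(model(x_a), y_a).backward()
        a_opt.step()
```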

Experimental Validation and Practical Implications:

The paper validates the proposed methods on standard Re-ID datasets (Market-1501, DukeMTMC-reID, MSMT17).

  • Performance Improvement: DCS-FB models consistently outperform their baseline backbones, demonstrating the effectiveness of incorporating channel selection. For instance, DCS-FB (ResNet50) and DCS-FB (HRNet) show improvements over their standard counterparts. DCS-FB (TransReID) achieves state-of-the-art results on all three datasets, improving mAP by 2.4% on Market-1501 compared to vanilla TransReID.
  • Efficiency: DCS-DNAS finds efficient architectures. DCS-DNAS (FBNetV2-XLarge), with ~1.9G FLOPs, outperforms models with significantly higher computational cost (e.g., ABD-Net with 14.1G FLOPs) on the challenging MSMT17 dataset.
  • IB Principle Validation: The ablation studies show that explicitly optimizing the IBB term leads to a lower actual IB loss and improved Re-ID performance, supporting the motivation that better adherence to the IB principle enhances discriminative feature learning. DCS-Attention without IBB already shows some improvement and IB loss reduction, suggesting that channel selection inherently favors more informative features. Explicit IBB optimization further boosts this.
  • Interpretability: Grad-CAM visualizations show that models trained with DCS-Attention and IBB attend more precisely to salient body parts critical for identification compared to baselines, providing a qualitative explanation for performance gains. t-SNE plots further illustrate improved inter-class separation and intra-class compactness of features learned by DCS models.
  • Training Time: The overhead introduced by DCS-Attention and IBB computation is relatively small, leading to only a slight increase in training time compared to baseline models (e.g., ~5.7% increase for DCS-FB (TransReID)).

In summary, the DCS-Attention module provides a practical method for integrating differentiable channel selection into self-attention for Re-ID. By coupling this with an Information Bottleneck-inspired training objective, the method effectively learns more discriminative features by focusing on relevant information channels, leading to state-of-the-art performance with manageable computational overhead, especially when integrated into efficient architectures found via DNAS. The method is versatile and can be applied to both CNN-based and Transformer-based backbones.
