
PFMG: Pyramidal Feature-aware Multimodal Gating

Updated 20 February 2026
  • The paper introduces PFMG, which hierarchically fuses multimodal features to preserve fine spatial details critical for small-object detection.
  • PFMG employs a three-step fusion process—hierarchical spatial gating, adaptive modality weighting, and gated feature fusion—to suppress cross-modal noise.
  • Empirical evaluations show that PFMG boosts mAP50 by up to 2.6 points on VEDAI and delivers 4–6× higher small-object mAP per GFLOP than simpler fusion methods.

Pyramidal Feature-aware Multimodal Gating (PFMG) is a hierarchical multimodal fusion module designed to address cross-modal noise and detail loss in object detection pipelines working with aerial RGB and IR imagery. PFMG was introduced as a core component of the Pyramidal Adaptive Cross-Gating Network (PACGNet), in which it reconstructs a detailed and context-aware feature pyramid capable of preserving fine spatial details and adaptively integrating information across modalities (Gu et al., 20 Dec 2025).

1. Architectural Integration and Workflow

PFMG is integrated within a dual-stream detection backbone, exemplified by a YOLOv8-style pyramid with levels P2–P5. The overall multimodal network first extracts pyramid-level features for each modality (RGB and IR). Symmetrical Cross-Gating (SCG) modules are applied at levels P2–P4, refining the respective modality features by horizontal cross-modal gating. PFMG modules are then placed at pyramid levels P3, P4, and P5 (from finest to coarsest among the fused levels). Each PFMG operates on:

  • The SCG-refined RGB and IR features at the current level, $F^l_{\mathrm{rgb}}$ and $F^l_{\mathrm{ir}}$.
  • The fused output $\hat F^{l-1}$ from the previous (immediately higher-resolution, finer) level.

This forms a top-down cascade: PFMG at level $l$ fuses its inputs to construct $\hat F^l$, propagating fine-grained, high-resolution information down the pyramid and thus reconstructing a single deeply fused, detail-preserving feature hierarchy $\{\hat F^3, \hat F^4, \hat F^5\}$.
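The top-down cascade can be sketched as follows (a minimal sketch with hypothetical names; `fuse` stands in for the full three-step PFMG fusion defined below, and the paper's actual module code is not reproduced here):

```python
def pfmg_cascade(rgb_feats, ir_feats, fuse):
    """Run the PFMG top-down cascade over pyramid levels P3..P5.

    rgb_feats / ir_feats: dicts mapping pyramid level -> SCG-refined feature.
    fuse(f_rgb, f_ir, prev): one PFMG fusion step; `prev` is the fused output
    of the finer level (None before the first fused level).
    Returns a dict of fused features {3: F3_hat, 4: F4_hat, 5: F5_hat}.
    """
    fused = {}
    prev = None
    for level in (3, 4, 5):  # finest to coarsest among the fused levels
        prev = fuse(rgb_feats[level], ir_feats[level], prev)
        fused[level] = prev
    return fused
```

The key design point visible in this sketch is that each level's fusion is conditioned on the already-fused finer level, so high-resolution detail flows downward through the pyramid.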

2. Gating Mechanisms and Formal Computation

At each pyramid level $l$, PFMG fusion comprises three sequential steps:

Step 1: Hierarchical Spatial Gate

A spatial prior is formed by concatenating the SCG-refined features from the previous, finer level:

$$S^{l} = [F^{l-1}_{\mathrm{rgb}}, F^{l-1}_{\mathrm{ir}}] \in \mathbb{R}^{H_{l-1} \times W_{l-1} \times 2C}$$

A 3×3 convolution with stride 2 is applied to $S^l$, followed by a sigmoid, to produce the spatial gate $M^l \in [0,1]^{H_l \times W_l \times 1}$:

$$M^l = \sigma\big(\mathrm{Conv}_{3\times3,\,s=2}(S^l)\big)$$

$M^l$ transmits spatial and structural details (essential for small-object detection) from finer to coarser levels.
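As an illustration of Step 1, here is a minimal NumPy sketch of the gate computation (random weights stand in for the learned convolution; BatchNorm and bias are omitted for brevity):

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def spatial_gate(f_rgb, f_ir, w):
    """Compute M^l = sigmoid(Conv3x3, stride 2([F_rgb^{l-1}, F_ir^{l-1}])).

    f_rgb, f_ir: (H, W, C) finer-level features; w: (3, 3, 2C) kernel
    producing a single-channel gate. Returns M in (0,1)^(H/2, W/2, 1).
    """
    s = np.concatenate([f_rgb, f_ir], axis=-1)      # (H, W, 2C) spatial prior
    s = np.pad(s, ((1, 1), (1, 1), (0, 0)))         # 'same' padding
    H, W = f_rgb.shape[:2]
    out = np.empty((H // 2, W // 2, 1))
    for i in range(H // 2):
        for j in range(W // 2):
            patch = s[2 * i:2 * i + 3, 2 * j:2 * j + 3, :]  # stride-2 window
            out[i, j, 0] = np.sum(patch * w)
    return sigmoid(out)
```

The stride-2 convolution halves the spatial resolution, so the gate produced from level $l-1$ features aligns pixel-for-pixel with level $l$.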

Step 2: Modality Interaction and Adaptive Weighting

Current-level SCG-refined features are concatenated and passed through two successive 1×1 convolutions:

$$U^l = \mathrm{Conv}_{1\times1}\!\left[\mathrm{Conv}_{1\times1}([F^l_{\mathrm{rgb}}, F^l_{\mathrm{ir}}])\right] \in \mathbb{R}^{H_l \times W_l \times 2C}$$

$U^l$ is split back into $G^l_{\mathrm{rgb}}$ and $G^l_{\mathrm{ir}}$. Pixel-wise fusion weights are computed using a 1×1 convolution followed by a softmax over the modality channels:

$$[\alpha^l_{\mathrm{rgb}}, \alpha^l_{\mathrm{ir}}] = \mathrm{Softmax}(\mathrm{Conv}_{1\times1}(U^l)), \qquad \alpha^l_{\mathrm{rgb}}(x,y) + \alpha^l_{\mathrm{ir}}(x,y) = 1$$
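A minimal NumPy sketch of the pixel-wise weighting in Step 2 (a 1×1 convolution reduces to a per-pixel matrix product; the weights here are illustrative, not learned):

```python
import numpy as np


def modality_weights(u, w):
    """Compute per-pixel modality weights via 1x1 conv + softmax.

    u: (H, W, 2C) interaction features; w: (2C, 2) weights of the 1x1
    convolution mapping to two modality logits per pixel.
    Returns (alpha_rgb, alpha_ir), each (H, W), summing to 1 pixelwise.
    """
    logits = u @ w                                   # 1x1 conv == per-pixel matmul
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    alpha = e / e.sum(axis=-1, keepdims=True)        # softmax over modalities
    return alpha[..., 0], alpha[..., 1]
```

Because the softmax is taken over the two modality logits at each pixel independently, the weights form a convex combination everywhere, which is what lets the module emphasize IR in some regions and RGB in others.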

Step 3: Hierarchically Gated Fusion

The fused features at each spatial location are computed by weighted sum:

$$F^l_{\mathrm{base}}(x,y) = \alpha^l_{\mathrm{rgb}}(x,y) \odot G^l_{\mathrm{rgb}}(x,y) + \alpha^l_{\mathrm{ir}}(x,y) \odot G^l_{\mathrm{ir}}(x,y)$$

Finally, these base fusions are modulated by the spatial gate (residual gating):

$$\hat F^l(x,y) = \big[1 + M^l(x,y)\big] \odot F^l_{\mathrm{base}}(x,y)$$

Each fused feature map $\hat F^l$ thus encodes both pixel-wise adaptive cross-modal information and spatially coherent fine structure from higher-resolution levels.
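The weighted sum and the residual spatial gating above can be sketched together (a NumPy illustration under the shapes defined in the previous steps):

```python
import numpy as np


def gated_fusion(g_rgb, g_ir, a_rgb, a_ir, m):
    """Weighted cross-modal sum followed by residual spatial gating.

    g_rgb, g_ir: (H, W, C) split interaction features.
    a_rgb, a_ir: (H, W) pixel-wise modality weights (sum to 1 per pixel).
    m: (H, W) spatial gate in [0, 1] from the finer level.
    Returns the fused map F_hat = (1 + M) * (a_rgb*G_rgb + a_ir*G_ir).
    """
    base = a_rgb[..., None] * g_rgb + a_ir[..., None] * g_ir  # F_base
    return (1.0 + m[..., None]) * base                        # residual gating
```

Note the `1 +` residual form: where the gate is inactive ($M^l \approx 0$) the base fusion passes through unchanged, and where the finer level signals structure ($M^l \approx 1$) the response is amplified up to 2×, so the gate can only emphasize, never erase, fused features.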

3. Multimodal Detail Preservation and Noise Suppression

PFMG’s gating achieves both robust multimodal integration and strong spatial coherence:

  • Adaptive Fusion: The softmax-based $\alpha$ weights prioritize the more informative modality at each pixel (e.g., emphasizing IR in low-light scenes or suppressing overexposed RGB), attenuating cross-modal noise common in naive fusion strategies.
  • Hierarchical Guidance: The spatial gate $M^l$ introduces structural priors from finer levels, preserving small-object edges and contours that tend to be lost in standard downsampling or aggregation schemes.
  • Small-object Sensitivity: By conditioning coarser level fusions on the outputs of finer levels, PFMG explicitly enables the propagation of cues necessary for detecting objects that may span only a handful of pixels, a well-documented challenge in aerial and remote sensing.

4. Implementation Parameters and Optimization

All PFMG operations adhere to the feature dimensionality $C$ established by the backbone ($C=256$ for P3/P4, $C=512$ for P5). Convolutions within PFMG use the following configuration:

  • 3×3 spatial gate convolution: stride 2, no bias, followed by BatchNorm and sigmoid activation.
  • 1×1 interaction convolutions: each followed by BatchNorm and ReLU; the fusion-weight branch applies a channel-wise softmax with temperature 1.
  • No gating-specific regularizer is applied; standard weight decay ($5 \times 10^{-4}$) and momentum (0.937) are sufficient.
  • Whole-network training uses WIoU v3 loss for localization and binary cross-entropy for classification.
  • Training incorporates learning-rate warmup (3 epochs), and aggressive augmentation (Mosaic, flips, translations) for convergence.
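Collected in one place, these settings might look like the following (a hypothetical summary dict; key names are illustrative and not taken from any released code):

```python
# Hypothetical training configuration summarizing the settings listed above;
# key names are illustrative, not from the paper's (unreleased) code.
PACGNET_TRAIN_CFG = {
    "channels": {"P3": 256, "P4": 256, "P5": 512},  # backbone feature width C
    "weight_decay": 5e-4,
    "momentum": 0.937,
    "warmup_epochs": 3,
    "loss": {"box": "WIoU_v3", "cls": "BCE"},
    "augmentation": ["mosaic", "flip", "translate"],
}
```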

5. Empirical Evaluation and Comparative Analysis

Extensive ablation studies on the VEDAI and DroneVehicle benchmarks demonstrate the impact and necessity of PFMG:

Configuration                    VEDAI mAP50    DroneVehicle mAP50
Baseline dual-stream YOLOv8      74.1%          80.1%
+PFMG only                       76.7%          80.7%
+SCG only                        76.6%          80.8%
+PFMG & SCG (PACGNet)            82.1%          81.7%

PFMG alone confers a 2.6-point mAP50 gain on VEDAI and 0.6 on DroneVehicle, with the combination of PFMG and SCG producing a non-additive 8.0-point increase on VEDAI. The computational cost of PFMG is modest (~0.4M parameters, ~0.7 GFLOPs at 640×640 input). Compared with simple addition or concatenation fusion, PFMG delivers a 4–6× improvement in small-object mAP per GFLOP.

Qualitatively, feature heatmaps from PACGNet concentrate activations cleanly on vehicle outlines, whereas baseline models display diffuse activations with increased false negatives on small objects and false positives in complex backgrounds.

6. Significance, Limitations, and Future Prospects

PFMG’s design addresses two persistent deficiencies in multimodal object detection: the tendency of naive fusion schemes to amplify cross-modal noise, and their failure to propagate essential multi-scale structure for small object detection. By leveraging hierarchical, detail-aware gating and pixel-adaptive cross-modal weighting, PFMG reconstructs a single-stream, deeply fused pyramid that maintains complementary information while mitigating the risk of detail loss.

A plausible implication is that the general gating principles established by PFMG are transferable to other multimodal hierarchical fusion tasks, especially where small-scale structural cues are critical and coarse fusion is insufficient. The presented data suggests that although the computational footprint is moderate, the benefit in small-target settings is substantial, particularly when combined with parallel horizontal gating as in PACGNet.

Further research may explore whether PFMG can be generalized beyond aerial detection to other domains with challenging small-object requirements, or adapted to other modality pairs beyond RGB and IR (Gu et al., 20 Dec 2025).
