
Occupancy-Gated Convolution (OGConv) Block

Updated 15 November 2025
  • The OGConv block is a specialized neural module that applies occupancy gating and compensation to process sparse 3D radar point clouds effectively.
  • It mitigates signal dilution from empty voxels by masking non-measurement areas and rescaling activations, ensuring consistent feature extraction.
  • Empirical integration within multi-view CNN-LSTM architectures shows up to a 5.12% accuracy gain with minimal additional computational overhead.

The Occupancy-Gated Convolution (OGConv) block is a specialized neural module designed to address the unique challenges inherent to sparse 3D point clouds, notably those produced by millimeter-wave radar for human activity recognition. Standard convolutional paradigms assume local density and continuity, leading to compromised effectiveness when applied to domains where spatial inputs are predominantly empty or sparsely occupied. The OGConv block introduces explicit occupancy gating and compensation to stabilize activations, preserve sensitivity, and maintain statistical consistency in feature extraction from highly sparse data.

1. Motivation and Signal Handling in Sparse Point Clouds

Conventional convolutional layers break down on sparse radar point clouds for two reasons: first, signal dilution, whereby the abundance of zeros from empty voxels drives feature responses toward zero and impedes gradient flow; second, inconsistent activation statistics between receptive fields with varying counts of occupied voxels, which destabilizes subsequent normalization and hinders effective learning. The OGConv block mitigates these issues via two principles:

  • Occupancy gating: Ensures that only voxels containing actual measurements contribute to intermediate computations.
  • Occupancy compensation: Applies a normalization factor reflecting the true versus expected population of the receptive field, restoring feature magnitude and stabilizing batch statistics.

2. Formal Mathematical Construction

OGConv operates on two input tensors: a feature map $F \in \mathbb{R}^{N \times C_{in} \times H \times W}$ and its corresponding occupancy indicator map $O \in \{0,1\}^{N \times 1 \times H \times W}$. Kernel weights $W \in \mathbb{R}^{C_{out} \times C_{in} \times k_h \times k_w}$ span $K = k_h \cdot k_w$ spatial positions per convolution.

Occupancy gating:

$$F_m = F \odot O$$

where $\odot$ denotes element-wise multiplication, yielding masked features with zeros in unoccupied regions.

Raw convolution:

$$Y_{\mathrm{raw}} = \mathrm{Conv}(F_m;\ W)$$

Occupancy counting:

$$D = \mathrm{Conv}(O;\ \mathbf{1})$$

with $\mathbf{1}$ an all-ones kernel, so $D$ records how many voxels are populated in each output cell's receptive field.

Occupancy compensation:

$$Y[i] = \begin{cases} Y_{\mathrm{raw}}[i] \cdot \dfrac{K}{D[i]} & \text{if } D[i] > 0 \\ 0 & \text{if } D[i] = 0 \end{cases}$$

This operation rescales the output only where measurements exist, zeroing cells with no contribution.
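To make the rescaling concrete (an illustrative computation, not from the source): with a $3\times 3$ kernel, $K = 9$; a receptive field holding three occupied voxels has its output multiplied by $9/3 = 3$, a fully occupied field is left unchanged, and an empty field is zeroed:

```python
# Compensation factors for a 3x3 kernel (K = 9) under hypothetical
# occupancy counts D per output cell.
K = 9
D = [9, 3, 1, 0]
scales = [K / d if d > 0 else 0.0 for d in D]
print(scales)  # [1.0, 3.0, 9.0, 0.0]
```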

3. Architectural Structure and Block Implementation

The OGConv block comprises three parallel, coordinated branches:

  • Gating branch: Propagates and updates the binary occupancy mask, ensuring that only valid positions are processed.
  • Convolution branch: Applies standard convolutional filters to masked features.
  • Compensation branch: Computes local occupancy, produces the rescaling factor, and adjusts feature magnitude accordingly.

Pseudocode for a forward pass (rendered here in PyTorch style; `W` is the layer's learned kernel):

import torch
import torch.nn.functional as TF

def OGConv(F, O, W, eps=1e-6):
    # F: features (N, C_in, H, W); O: binary occupancy (N, 1, H, W)
    pad = W.shape[2] // 2                     # 'same' padding
    F_m = F * O                               # Occupancy gating
    Y_raw = TF.conv2d(F_m, W, padding=pad)    # Feature extraction
    ones = torch.ones(1, 1, W.shape[2], W.shape[3], device=O.device)
    D = TF.conv2d(O, ones, padding=pad)       # Occupancy counting
    K = W.shape[2] * W.shape[3]               # Receptive-field size k_h * k_w
    scale = K / (D + eps)                     # Compensation (eps for stability)
    mask_out = (D > 0).float()                # Keep only cells with measurements
    Y = Y_raw * scale * mask_out              # Rescaled, masked output
    return Y, mask_out                        # Updated occupancy propagates onward
A small $\epsilon$ (e.g., $10^{-6}$) ensures numerical stability during division.
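The compensation property can be verified with a pure-NumPy reference of the same forward pass (an illustrative single-channel sketch, not the authors' code). With an averaging kernel $W = \mathbf{1}/K$ and features that are constant on occupied voxels, the raw response is $c \cdot D/K$, so the $K/D$ compensation recovers the constant $c$ wherever $D > 0$ and leaves empty cells exactly zero:

```python
import numpy as np

def ogconv_2d(F, O, W, eps=1e-6):
    """Single-channel OGConv reference with zero-padded 'same' borders."""
    kh, kw = W.shape
    K = kh * kw
    ph, pw = kh // 2, kw // 2
    Fp = np.pad(F * O, ((ph, ph), (pw, pw)))      # occupancy gating
    Op = np.pad(O, ((ph, ph), (pw, pw)))
    H, Wd = F.shape
    Y = np.zeros((H, Wd))
    for i in range(H):
        for j in range(Wd):
            y_raw = np.sum(Fp[i:i + kh, j:j + kw] * W)   # masked convolution
            d = np.sum(Op[i:i + kh, j:j + kw])           # occupancy counting
            Y[i, j] = y_raw * (K / (d + eps)) if d > 0 else 0.0
    return Y

# Constant value 5.0 on occupied voxels, averaging kernel W = 1/K:
O = np.zeros((6, 6)); O[1, 1] = O[2, 4] = O[4, 2] = 1.0
F = 5.0 * O
W = np.ones((3, 3)) / 9.0
Y = ogconv_2d(F, O, W)
# Wherever the receptive field contains a measurement, Y is ~5.0;
# cells whose receptive field is empty are exactly 0.
```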

4. Integration within Parallel-CNN Bi-LSTM Networks

Within the OG-PCL tri-view architecture, three orthogonal 2D projections (top, front, and side) are processed in parallel, each with a dedicated OGConv stack. In each branch, the projection $P_t^v$ and its occupancy mask $O_t^v$ are passed sequentially through OGConv, batch normalization, rectified linear activation, and average pooling. Output features are then globally pooled and linearly projected into per-view embeddings. These are concatenated across views and input to a bidirectional LSTM, whose last hidden state is classified via a fully-connected softmax layer. This structure enables the network to encode complementary spatial information while maintaining computational efficiency.
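A minimal PyTorch sketch of one per-view branch follows; the layer widths, embedding size, and class names are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as TF

class OGConv2d(nn.Module):
    """Occupancy-gated convolution layer (sketch; hyperparameters illustrative)."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False)
        self.register_buffer("ones", torch.ones(1, 1, k, k))  # fixed counting kernel
        self.K = k * k

    def forward(self, F, O):
        Y_raw = self.conv(F * O)                              # gate, then convolve
        D = TF.conv2d(O, self.ones, padding=self.conv.padding[0])
        mask = (D > 0).float()
        Y = Y_raw * (self.K / (D + 1e-6)) * mask              # compensate, zero empties
        return Y, mask

class TriViewBranch(nn.Module):
    """One per-view stack: OGConv -> BN -> ReLU -> pool -> embedding."""
    def __init__(self, c_out=16, emb=32):
        super().__init__()
        self.og = OGConv2d(1, c_out)
        self.bn = nn.BatchNorm2d(c_out)
        self.proj = nn.Linear(c_out, emb)

    def forward(self, P, O):
        Y, _ = self.og(P, O)
        Y = TF.avg_pool2d(TF.relu(self.bn(Y)), 2)
        return self.proj(Y.mean(dim=(2, 3)))                  # global pool -> embedding
```

Per frame, the three view embeddings would be concatenated and the resulting sequence fed to a bidirectional `nn.LSTM`, with the final hidden state passed to a linear softmax classifier.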

5. Empirical Validation and Performance Characteristics

Comprehensive ablation studies confirm the OGConv block’s impact:

  • Substituting standard convolutions with OGConv in OG-PCL yields a $+1.46\%$ accuracy improvement (from $90.29\%$ to $91.75\%$) with only $+0.06$M additional parameters.
  • Removing occupancy compensation (i.e., omitting the $K/D$ scaling) reduces accuracy to $90.54\%$; precision remains high ($0.9209$), but recall diminishes, substantiating the compensation mechanism's role in preserving sensitivity to sparse stimuli.
  • The tri-view approach yields $+1.97\%$ to $+5.12\%$ gains over any single view, indicating that occupancy-aware computation is synergistic with multi-projection processing.

6. Computational Complexity and Deployment Considerations

The OGConv block introduces minimal parameter overhead relative to standard convolutions. Each block requires only standard convolution weights; the all-ones counting kernel is fixed rather than learned. Most OGConv layers use $3\times 3$ or $1\times 1$ kernels, leading to $\sim 0.06$M total extra parameters in a typical deployment. Operational complexity is slightly increased compared to standard convolutions due to the additional occupancy-counting convolution, but this cost is amortized by aggressive spatial downsampling.
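The parameter figure can be sanity-checked with simple arithmetic (the layer widths below are hypothetical, chosen only to show that a small $3\times 3$ stack lands at the same order of magnitude as the reported $\sim 0.06$M; the counting convolution contributes nothing, since its all-ones kernel is fixed):

```python
def conv_params(c_in, c_out, k, bias=False):
    """Learnable parameters in one k x k convolution layer."""
    return c_out * c_in * k * k + (c_out if bias else 0)

# Hypothetical per-view stack: 1 -> 16 -> 32 -> 64 channels, 3x3 kernels.
stack = [(1, 16), (16, 32), (32, 64)]
per_view = sum(conv_params(ci, co, 3) for ci, co in stack)
print(per_view)      # 23184 parameters per view
print(3 * per_view)  # 69552 across three views, i.e. ~0.07M
```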

On contemporary hardware, such as the NVIDIA A100, OGConv-augmented networks achieve real-time inference. Further efficiency can be obtained through quantization and fused implementations of feature gating and counting. Techniques such as grouped or depthwise convolutions in early layers and minimal kernel sizes (e.g., $3\times 3$) are recommended to reduce FLOPs.

7. Extensions and Practical Generalization

Applying OGConv to other sparse modalities (e.g., LiDAR data, event cameras) requires judicious selection of the occupancy threshold; while the canonical form uses $D > 0$, tightening this criterion (e.g., $D \geq \tau$ for some threshold $\tau$) may improve denoising in heavily cluttered measurements. Clamping the compensation factor (e.g., using $\min(K/D, \alpha_{\max})$) prevents excessive amplification in extremely sparse receptive fields. At the hardware level, memory-efficient implementation is possible by fusing the counting and feature convolutions to reduce memory-access overhead.
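Both safeguards can be expressed in a few lines (a NumPy sketch; `tau` and `alpha_max` are hypothetical tuning values, not taken from the source):

```python
import numpy as np

def robust_compensation(Y_raw, D, K, tau=2, alpha_max=4.0):
    """Thresholded gating (D >= tau) with a clamped compensation factor."""
    mask = (D >= tau).astype(Y_raw.dtype)            # stricter occupancy criterion
    scale = np.minimum(K / np.maximum(D, 1), alpha_max)  # cap amplification
    return Y_raw * scale * mask

Y_raw = np.array([1.0, 1.0, 1.0, 1.0])
D = np.array([9.0, 3.0, 1.0, 0.0])                   # occupied counts, K = 9
print(robust_compensation(Y_raw, D, K=9))            # -> [1. 3. 0. 0.]
```

Note how the cell with $D = 1$ is suppressed by the $\tau = 2$ threshold even though its uncapped compensation factor would have been $9$.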

In summary, the Occupancy-Gated Convolution block systematically addresses the pathologies of sparse voxelized inputs by combining explicit masking and adaptive activation rescaling, stabilizing learned representations and supporting high-performance, lightweight deployment in radar-based human activity recognition pipelines (Yan et al., 12 Nov 2025).
