
Hierarchical Warping & Occlusion-Aware Noise Suppression

Updated 18 November 2025
  • The paper introduces a pyramidal coarse-to-fine architecture that efficiently captures large displacements and refines flow iteratively.
  • It employs a novel sampling-based correlation layer that bypasses interpolation artifacts, effectively mitigating ghosting in feature warping.
  • The method integrates explicit occlusion-aware cost reweighting within a shared decoder, yielding significant performance gains on benchmarks like Sintel and KITTI.

Hierarchical Warping and Occlusion-Aware Noise Suppression refers to architectural and algorithmic strategies for optical flow estimation networks, focused on addressing the challenges posed by feature warping artifacts (notably ghosting) and ambiguous matches in occluded regions. These methods are exemplified by the OAS-Net (Occlusion Aware Sampling Network), which replaces traditional warping-based correlation with a sampling-based alternative and integrates explicit occlusion-aware cost reweighting. This combination suppresses noise propagated by occlusions and interpolation, yielding superior flow estimates in challenging scenarios (Kong et al., 2021).

1. Pyramidal Coarse-to-Fine Architecture

Hierarchical (pyramidal) processing is foundational in contemporary optical flow estimation. In OAS-Net, a shared two-layer convolutional subnetwork recursively constructs 6-level feature pyramids for both input images, with each level $k$ representing a spatial downsampling by $2^k$ and increasing channel counts: [16, 32, 64, 96, 128, 160] for levels 1 through 6.

Flow is estimated progressively from coarse (level 6) to fine (level 1):

  • At level $k$, the flow $\hat f^{k+1}$ and occlusion map $O^{k+1}$ from the coarser level are upsampled by 2 (denoted $u_f$, $u_O$).
  • A matching cost volume is computed using sampling-based correlation (see Section 2).
  • The raw cost volume and $u_O$ feed into an occlusion-aware module, producing $c_{oa}^k$.
  • A shared decoder operates on $[f_1^k, u_f, c_{oa}^k]$, outputting a flow residual $\Delta f^k$ and an updated occlusion map $O^k$.
  • The refined flow is $f^k = u_f + \Delta f^k$.

This multiscale design enables the system to efficiently capture large displacements and refine flow iteratively.
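The refinement loop above can be sketched in a few lines. Here `predict_residual` is a hypothetical stand-in for the cost-volume construction and decoder, and the upsampling uses nearest-neighbour repetition for simplicity — both are illustrative assumptions, not the paper's exact operators:

```python
import numpy as np

def upsample_flow(flow):
    """Upsample a 2-channel flow field by 2x (nearest neighbour),
    doubling the vectors so displacements stay in pixel units."""
    up = flow.repeat(2, axis=1).repeat(2, axis=2)
    return 2.0 * up

def coarse_to_fine(pyramid_shapes, predict_residual):
    """Schematic of the coarse-to-fine loop: start with zero flow at the
    coarsest level, then repeatedly upsample and add a per-level residual."""
    flow = np.zeros((2, *pyramid_shapes[-1]))       # coarsest level
    for _ in reversed(pyramid_shapes[:-1]):         # toward the finest level
        flow = upsample_flow(flow)                  # u_f
        flow = flow + predict_residual(flow)        # f^k = u_f + Δf^k
    return flow

# Toy run: 3-level pyramid, decoder stub that predicts zero residuals.
shapes = [(16, 16), (8, 8), (4, 4)]                 # fine -> coarse
out = coarse_to_fine(shapes, lambda f: np.zeros_like(f))
print(out.shape)  # (2, 16, 16)
```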

2. Sampling-Based Correlation Layer

The pivotal methodological innovation is the sampling-based correlation. Standard networks such as PWC-Net deploy feature warping—interpolating target features spatially according to the predicted flow—prior to local inner product correlation. OAS-Net, in contrast, eschews explicit warping altogether.

Correlation at each pixel $x$ and displacement offset $d$ (with $|d| \leq R$, $R = 4$ by default) is computed as

$$c^k(x, d) = \langle f_1^k(x),\; f_2^k(x + u_f(x) + d)\rangle$$

where $\langle \cdot, \cdot \rangle$ denotes the channel-wise inner product.

This process samples features from the predicted target locations plus a search window, but does not physically shift or interpolate the grid. Therefore, the operation avoids introducing interpolation artifacts and local inconsistencies.
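A minimal NumPy sketch of sampling-based correlation follows. It uses nearest rounding instead of the sub-pixel sampling a real implementation would use, and returns zero for out-of-bounds reads — both simplifying assumptions; the point is that $f_2$ is only read at queried locations, never warped:

```python
import numpy as np

def sampling_correlation(f1, f2, flow, R=4):
    """For each pixel x and offset d with |d| <= R, correlate f1(x) with
    f2 sampled at x + flow(x) + d. The grid of f2 is never shifted or
    interpolated; it is only read at the queried locations."""
    C, H, W = f1.shape
    D = 2 * R + 1
    cost = np.zeros((D * D, H, W))
    for y in range(H):
        for x in range(W):
            ty = y + int(round(flow[1, y, x]))
            tx = x + int(round(flow[0, y, x]))
            for i, dy in enumerate(range(-R, R + 1)):
                for j, dx in enumerate(range(-R, R + 1)):
                    sy, sx = ty + dy, tx + dx
                    if 0 <= sy < H and 0 <= sx < W:
                        cost[i * D + j, y, x] = f1[:, y, x] @ f2[:, sy, sx]
    return cost

# Sanity check: identical features and zero flow make the d = (0, 0)
# slice equal to the squared feature norms.
rng = np.random.default_rng(0)
f = rng.standard_normal((8, 6, 6))
cost = sampling_correlation(f, f, np.zeros((2, 6, 6)), R=1)
center = (2 * 1 + 1) ** 2 // 2  # flat index of d = (0, 0)
print(np.allclose(cost[center], (f ** 2).sum(axis=0)))  # True
```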

3. Ghosting and Noise in Feature Warping

Feature warping has a known pathology: ghosting. When multiple source locations are mapped to the same warped target location (frequent in occlusions or fast motions), bilinear interpolation aggregates disparate pixel values, resulting in ambiguous, duplicated features (“ghosts”). This can corrupt cost volume construction and thus flow estimation.

Sampling-based correlation addresses this by querying target features independently at specified locations; there is no many-to-one mixing. The result is a cost volume intrinsically robust to aliasing and less affected by motion boundary artifacts. The methodology never physically alters the target feature grid, which precludes the formation of local ghosts.
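The many-to-one duplication behind ghosting can be seen in a one-dimensional toy example, using nearest-neighbour backward warping as an illustrative simplification:

```python
import numpy as np

def backward_warp_1d(feat, flow):
    """Backward warping (sketch): warped(x) = feat(x + flow(x)),
    with nearest rounding and edge clamping for simplicity."""
    n = feat.shape[0]
    idx = np.clip(np.round(np.arange(n) + flow).astype(int), 0, n - 1)
    return feat[idx]

# A distinctive feature sits at position 3. An occluded pixel whose
# (unreliable) flow also points at position 3 duplicates that feature
# in the warped result -- a "ghost".
feat = np.zeros(8); feat[3] = 1.0
flow = np.zeros(8); flow[6] = -3.0        # occluded pixel 6 reads pixel 3
warped = backward_warp_1d(feat, flow)
print(warped)  # the value at index 3 now also appears at index 6
```

Sampling-based correlation never materializes such a warped array; each query reads the original grid independently, so no duplicated features enter the cost volume.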

4. Occlusion-Aware Cost Volume Reweighting

Occluded regions are prone to unreliable matches, as true correspondences do not exist. OAS-Net introduces an explicit occlusion-awareness mechanism:

  • Each pyramid level receives an occlusion-awareness map $O^{k+1}(x) \in [0,1]$ estimating the non-occlusion likelihood, upsampled for use at the current level ($u_O$).
  • Complementary weights are defined: $W_\text{nonocc}(x) = u_O(x)$ and $W_\text{occ}(x) = 1 - u_O(x)$.
  • The raw cost volume $c^k(x,d)$ is reweighted by elementwise products with $W_\text{nonocc}$ and $W_\text{occ}$, producing $C_1(x,d)$ and $C_2(x,d)$ respectively.
  • Two dedicated 2D convolutions ($\mathrm{conv}_1$, $\mathrm{conv}_2$) are applied, followed by merging and a leaky-ReLU activation: $c_{oa}^k(x,d) = \mathrm{LReLU}\left(\mathrm{conv}_1(C_1)(x,d) + \mathrm{conv}_2(C_2)(x,d)\right)$

This splitting enables the network to learn different matching filters for visible and occluded regions, akin to a learned self-attention mechanism over the cost volume.
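The reweighting can be sketched compactly, with the two learned convolutions reduced to plain channel-mixing matrices (`w1`, `w2` are hypothetical stand-ins for $\mathrm{conv}_1$ and $\mathrm{conv}_2$):

```python
import numpy as np

def leaky_relu(x, a=0.1):
    return np.where(x > 0, x, a * x)

def occlusion_aware_cost(cost, u_o, w1, w2):
    """Split the raw cost volume with complementary non-occlusion /
    occlusion weights, mix channels separately for each branch
    (standing in for conv_1, conv_2), then merge with a leaky ReLU."""
    C1 = u_o[None] * cost                   # W_nonocc * c^k
    C2 = (1.0 - u_o)[None] * cost           # W_occ   * c^k
    mixed = (np.einsum('oc,chw->ohw', w1, C1) +
             np.einsum('oc,chw->ohw', w2, C2))
    return leaky_relu(mixed)

rng = np.random.default_rng(1)
cost = rng.standard_normal((9, 4, 4))       # (2R+1)^2 = 9 offsets for R = 1
u_o = rng.uniform(size=(4, 4))              # non-occlusion likelihood
w1 = rng.standard_normal((16, 9))
w2 = rng.standard_normal((16, 9))
out = occlusion_aware_cost(cost, u_o, w1, w2)
print(out.shape)  # (16, 4, 4)
```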

5. Shared Decoder for Flow and Occlusion Estimation

For architectural compactness and consistency, the same decoder is shared across all pyramid levels. This module comprises an 8-layer U-shaped sequence of $3\times 3$ convolutions (channels: 128→128→128→128→128→96→64→32), splitting into two prediction heads:

  • A flow head, predicting the 2-channel residual flow $\Delta f^k$
  • An occlusion head, outputting $O^k(x)$ as a sigmoid map constrained to $[0,1]$

Sharing the decoder reinforces hierarchical consistency and decreases network complexity.
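A PyTorch sketch of such a decoder is below. The input width, padding, and activation choices are assumptions, and the trunk is written as a plain sequential stack rather than the paper's exact U-shaped connectivity:

```python
import torch
import torch.nn as nn

class SharedDecoder(nn.Module):
    """Sketch: a stack of 3x3 convolutions with the channel widths from
    the text, feeding a 2-channel flow-residual head and a sigmoid
    occlusion head. One instance is reused at every pyramid level."""
    def __init__(self, in_ch):
        super().__init__()
        widths = [128, 128, 128, 128, 128, 96, 64, 32]
        layers, c = [], in_ch
        for w in widths:
            layers += [nn.Conv2d(c, w, 3, padding=1), nn.LeakyReLU(0.1)]
            c = w
        self.trunk = nn.Sequential(*layers)
        self.flow_head = nn.Conv2d(c, 2, 3, padding=1)   # Δf^k
        self.occ_head = nn.Conv2d(c, 1, 3, padding=1)    # O^k logits

    def forward(self, x):
        h = self.trunk(x)
        return self.flow_head(h), torch.sigmoid(self.occ_head(h))

dec = SharedDecoder(in_ch=96)                            # in_ch is assumed
df, occ = dec(torch.randn(1, 96, 16, 16))
print(df.shape, occ.shape)
```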

6. Optimization and Learning

OAS-Net is trained using a multi-scale $L_2$ endpoint-error loss (identical to PWC-Net's):

$$L = \sum_{k=1}^{6} \alpha_k \, \Vert f^k - f^{*k} \Vert_2$$

Here, $f^{*k}$ is the downsampled ground-truth flow at level $k$. The occlusion map $O^k$ is learned implicitly; no ground-truth occlusion masks, occlusion-specific losses, or regularizers are used.
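The loss is straightforward to write down; the level weights `alphas` below are illustrative, not the paper's values:

```python
import numpy as np

def epe(pred, gt):
    """Per-pixel endpoint error: L2 norm over the 2 flow channels."""
    return np.sqrt(((pred - gt) ** 2).sum(axis=0))

def multiscale_loss(preds, gts, alphas):
    """L = sum_k alpha_k * ||f^k - f*^k||_2, with the per-level norm
    taken here as the sum of per-pixel endpoint errors."""
    return sum(a * epe(p, g).sum() for a, p, g in zip(alphas, preds, gts))

# Toy check: perfect predictions at every level give zero loss.
gts = [np.ones((2, 4 // 2**k, 4 // 2**k)) for k in range(2)]
print(multiscale_loss(gts, gts, alphas=[0.32, 0.08]))  # 0.0
```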

7. Empirical Performance and Impact

Ablation studies demonstrate the significance of both the sampling-based correlation and the occlusion-aware module. EPE on Sintel Final / KITTI 2012:

  • Warping, no occlusion: 4.05/4.62
  • Warping, occlusion: 3.98/4.37
  • Sampling, no occlusion: 3.86/4.44
  • Sampling, occlusion: 3.79/4.11

Switching from warping to sampling yields a 4.7% drop in Sintel Final EPE. Incorporating occlusion awareness improves KITTI by 5.4%. Combining both yields the largest improvement: 6.4% (Sintel Final) and 11.0% (KITTI 2012).
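The quoted percentages follow directly from the ablation numbers above:

```python
# Relative EPE improvements recomputed from the ablation table.
print(round(100 * (4.05 - 3.86) / 4.05, 1))  # 4.7  (warping -> sampling, Sintel Final)
print(round(100 * (4.62 - 4.37) / 4.62, 1))  # 5.4  (occlusion module, KITTI 2012)
print(round(100 * (4.05 - 3.79) / 4.05, 1))  # 6.4  (both, Sintel Final)
print(round(100 * (4.62 - 4.11) / 4.62, 1))  # 11.0 (both, KITTI 2012)
```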

On public benchmarks, OAS-Net (6.16M parameters, 0.03 s/frame) achieves:

  • Sintel Clean test EPE: 3.65 (among best for lightweight networks)
  • Sintel Final test EPE: 5.01 (comparable to PWC-Net/IRR-PWC)
  • KITTI 2012 test EPE: 1.4 (ties state-of-the-art)

A plausible implication is that hierarchical warping avoidance combined with explicit occlusion-aware noise suppression constitutes an effective paradigm for robust and efficient optical flow estimation, particularly in lightweight network deployments and scenarios with significant occlusions and fast motions (Kong et al., 2021).
