Haze-Aware Vector Field

Updated 13 November 2025
  • Haze-aware vector field is a parameterized model that encodes atmospheric effects like scattering, color shift, and attenuation for simulation and dehazing tasks.
  • It employs numerical integration (RK4) along with CNN and LUT strategies to achieve stable, high-fidelity image reconstruction with improved PSNR metrics.
  • In 3D volumetric applications, the framework disentangles haze from surface reflectance, enabling accurate scene geometry and novel view synthesis under adverse atmospheric conditions.

A haze-aware vector field is a parameterized vector field or collection of spatially indexed functions that encode the effects of atmospheric haze—scattering, color shift, and attenuation—on either 2D images or 3D volumetric representations. The haze-aware vector field bridges physical scattering models with data-driven or neural architectures, enabling both forward modeling (simulation of haze) and inverse tasks (dehazing, 3D reconstruction) in computer vision. Recent works implement this vector field as either (1) a function over images that guides dehazing via ODE-based flow matching, or (2) a set of neural fields embedded within NeRF-style volumetric scene representations, parameterizing haze properties such as volumetric density and color at each spatial location.

1. Mathematical Formulation in Image Dehazing

The haze-aware vector field in 2D image dehazing is formalized as a continuous-time, spatially indexed flow transporting a hazy image toward its clear counterpart. Specifically, given a hazy observation $X_0$ and a clean target $X_1$, the dehazing transformation is posed as an initial value problem:

$$\frac{dX(t)}{dt} = F(t, X(t); \theta), \qquad X(0) = X_0, \qquad X(1) \approx X_1$$

where $F: [0, 1] \times \mathbb{R}^{H \times W \times 3} \to \mathbb{R}^{H \times W \times 3}$ is the haze-aware vector field. In 4KDehazeFlow (Chen et al., 12 Nov 2025), $F$ is decomposed as

$$F(t, X; \theta) = F_{\rm purify}(t, X; \theta_p) + \lambda\, F_{\rm LUT}(t, X; \theta_{\rm LUT})$$

  • $F_{\rm purify}$: a per-pixel “atmospheric-scattering purifier” CNN that predicts a residual toward haze removal, following an AODNet-style form.
  • $F_{\rm LUT}$: a residual derived from a trainable 3D lookup table (LUT) encoding a nonlinear, data-driven color correction.
  • $\lambda$: a scalar weight (empirically set to 0.5).

The vector field thus flexibly parameterizes local haze effects via CNN-learned scale-and-bias fields, and global nonlinear color distortions via the LUT.
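A minimal PyTorch sketch of this decomposition is given below; the purifier architecture, the time-conditioning via an extra input channel, the zero-initialized LUT, and all layer sizes are illustrative assumptions rather than the 4KDehazeFlow implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F_torch


class HazeAwareVectorField(nn.Module):
    """Sketch of F(t, X) = F_purify + lambda * F_LUT; sizes are illustrative."""

    def __init__(self, lut_size: int = 33, lam: float = 0.5):
        super().__init__()
        # AODNet-style purifier: predicts per-pixel scale K and bias b,
        # forming a residual K * X - b that pushes the image toward clarity.
        self.purifier = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 6, 3, padding=1),  # 3 channels for K, 3 for b
        )
        # Trainable 3D LUT over RGB, stored as (3, M, M, M) for grid_sample.
        self.lut = nn.Parameter(torch.zeros(3, lut_size, lut_size, lut_size))
        self.lam = lam

    def forward(self, t, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        # Broadcast the (scalar or per-sample) time as an extra input channel.
        t_map = torch.as_tensor(t, dtype=x.dtype, device=x.device).expand(b, 1, h, w)
        k, bias = self.purifier(torch.cat([x, t_map], dim=1)).chunk(2, dim=1)
        f_purify = k * x - bias
        # Trilinear LUT lookup: map RGB in [0, 1] to grid coords in [-1, 1].
        grid = (x.clamp(0, 1) * 2 - 1).permute(0, 2, 3, 1).view(b, 1, h, w, 3)
        f_lut = F_torch.grid_sample(
            self.lut.expand(b, -1, -1, -1, -1), grid, align_corners=True
        ).view(b, 3, h, w)
        return f_purify + self.lam * f_lut
```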

2. Numerical Integration and Optimization

The haze-aware vector field, formulated as an ODE, is numerically integrated with the classical fourth-order Runge–Kutta (RK4) method for stability and accuracy. This proceeds via:

$$\begin{aligned} k_1 &= F(t_i, X_i; \theta) \\ k_2 &= F\left(t_i + \frac{\Delta t}{2},\, X_i + \frac{\Delta t}{2} k_1; \theta\right) \\ k_3 &= F\left(t_i + \frac{\Delta t}{2},\, X_i + \frac{\Delta t}{2} k_2; \theta\right) \\ k_4 &= F(t_i + \Delta t,\, X_i + \Delta t\, k_3; \theta) \\ X_{i+1} &= X_i + \frac{\Delta t}{6}(k_1 + 2k_2 + 2k_3 + k_4) \end{aligned}$$
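
In code, one full integration pass from $t = 0$ to $t = 1$ reduces to a short loop (a generic sketch; the step count `n_steps` is an illustrative choice, and `F` is any callable matching the formulation above):

```python
import torch


def integrate_rk4(F, x0: torch.Tensor, n_steps: int = 4) -> torch.Tensor:
    """Integrate dX/dt = F(t, X) from t = 0 to t = 1 with classical RK4."""
    x, dt = x0, 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        k1 = F(t, x)
        k2 = F(t + dt / 2, x + dt / 2 * k1)
        k3 = F(t + dt / 2, x + dt / 2 * k2)
        k4 = F(t + dt, x + dt * k3)
        x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x  # approximation of X(1), the dehazed image
```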

Training proceeds on paired $(X_0, X_1)$ samples with:

  • Flow-matching loss: $\mathcal{L}_{\rm flow} = \mathbb{E}\, \| F(t, X_t; \theta) - (X_1 - X_0) \|_2^2$, evaluated at interpolated images $X_t$;
  • RK4-integrated reconstruction loss: $\mathcal{L}_{\rm rec} = \| \widehat{X}_1 - X_1 \|_1$;
  • Combined objective: $\mathcal{L} = \mathcal{L}_{\rm flow} + \alpha \mathcal{L}_{\rm rec}$, with $\alpha = 1$ (sketched in code below).
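
A sketch of this combined objective, assuming the standard linear flow-matching interpolant $X_t = (1 - t)X_0 + tX_1$ (the interpolant choice and batch handling are assumptions; `integrate_rk4` is the helper sketched above):

```python
import torch


def training_losses(F, x0, x1, alpha: float = 1.0):
    """Flow-matching + RK4-reconstruction objective on one paired batch."""
    b = x0.shape[0]
    t = torch.rand(b, device=x0.device).view(b, 1, 1, 1)  # random time per sample
    xt = (1 - t) * x0 + t * x1          # linear interpolant between hazy and clean
    target = x1 - x0                    # velocity of the linear interpolant
    loss_flow = ((F(t, xt) - target) ** 2).mean()
    x1_hat = integrate_rk4(F, x0)       # RK4 rollout from the sketch above
    loss_rec = (x1_hat - x1).abs().mean()
    return loss_flow + alpha * loss_rec
```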

The integration yields a stable haze-removal trajectory: ablations show that replacing RK4 with Euler integration degrades PSNR by 3 dB and produces less stable results, while removing the LUT branch causes a PSNR drop of more than 6 dB and strong color bias.

3. Vector Fields in Volumetric 3D Representations

In neural volumetric scene reconstruction (e.g., NeRF), a haze-aware vector field generalizes to a tuple of functions defined over 3D coordinates:

$$\begin{aligned} &c(\mathbf{p}, \mathbf{d}): \text{view-dependent surface color field} \\ &\sigma(\mathbf{p}): \text{opaque volume density (surface absorption)} \\ &\sigma_s(\mathbf{p}): \text{scattering coefficient field (haze density)} \\ &c_s(\mathbf{p}): \text{airlight color field (haze color contribution)} \end{aligned}$$

In DehazeNeRF (Chen et al., 2023), these fields are each parameterized by neural networks (MLPs), with surface and haze properties represented at differing spatial-frequency scales—surface fields are high-frequency, while haze fields are low-frequency, reflecting the spatial smoothness of atmospheric scattering. The total radiance per pixel, per viewing ray, is computed via an extended radiative transfer equation:

$$C(r, \mathbf{d}) = C_{\mathrm{surface}} + C_{\mathrm{haze}}$$

where

$$C_{\mathrm{surface}} = \int c(r(t), \mathbf{d})\, \sigma(r(t))\, T_{\sigma+\sigma_s}(t)\, dt$$

$$C_{\mathrm{haze}} = \int c_s(r(t))\, \sigma_s(r(t))\, T_{\sigma+\sigma_s}(t)\, dt$$

with $T_{\sigma+\sigma_s}(t)$ the transmittance along the ray.
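
A discrete quadrature of this split integral can be sketched as follows; the uniform sampling, the per-sample extinction split, and the callable field stubs `c`, `sigma`, `sigma_s`, `c_s` (returning per-point colors and densities) are assumptions for illustration:

```python
import torch


def render_ray(c, sigma, sigma_s, c_s, origin, direction,
               n_samples: int = 64, far: float = 4.0):
    """Numerical quadrature of C = C_surface + C_haze along one ray."""
    ts = torch.linspace(0.0, far, n_samples)
    dt = far / n_samples
    pts = origin + ts[:, None] * direction            # (n_samples, 3) sample points
    sig, sig_s = sigma(pts), sigma_s(pts)             # surface / haze densities
    # Discrete transmittance from the joint extinction sigma + sigma_s.
    alpha = 1.0 - torch.exp(-(sig + sig_s) * dt)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
    w = trans * alpha                                 # total per-sample weight
    frac_surf = sig / (sig + sig_s + 1e-8)            # extinction share of the surface
    C_surface = (w * frac_surf)[:, None] * c(pts, direction)
    C_haze = (w * (1.0 - frac_surf))[:, None] * c_s(pts)
    return C_surface.sum(dim=0) + C_haze.sum(dim=0)
```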

This haze-aware vector field allows the model to disentangle haze-induced image degradations from scene reflectance and geometry, enabling accurate haze removal and 3D reconstruction.

4. Parameterization Strategies: CNNs, LUTs, and Neural Fields

The haze-aware vector field’s parameterization adapts to task domain:

  • 2D Image Domain (Chen et al., 12 Nov 2025):
    • “Purifier” CNN: Predicts per-pixel scale and bias maps (AODNet style).
    • 3D LUT: $H \in \mathbb{R}^{M \times M \times M \times 3}$, typically $M = 33$. Trilinear interpolation yields smooth, nonlinear color corrections per pixel.
  • 3D Volumetric Domain (Chen et al., 2023, Li et al., 2023):
    • Surface SDF MLP (e.g., eight layers, 256 channels).
    • Surface Color MLP – view-adaptive, fed position, direction, SDF gradient.
    • Scattering/Airlight MLPs – low-complexity “band-limited” coordinate nets with sine (SIREN-style) or BACON activations (see the sketch below).
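
A minimal sketch of such a band-limited coordinate net, assuming a SIREN-style sine activation (depth, width, and the frequency scale `omega` are illustrative, and a full SIREN would also use its specific initialization scheme):

```python
import torch
import torch.nn as nn


class SineField(nn.Module):
    """Band-limited coordinate MLP for smooth haze fields (SIREN-style sketch)."""

    def __init__(self, out_dim: int = 1, hidden: int = 64, omega: float = 5.0):
        super().__init__()
        self.l1 = nn.Linear(3, hidden)      # input: 3D position p
        self.l2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, out_dim)
        self.omega = omega                  # low omega keeps the field low-frequency

    def forward(self, p: torch.Tensor) -> torch.Tensor:
        h = torch.sin(self.omega * self.l1(p))
        h = torch.sin(self.omega * self.l2(h))
        return self.out(h)


# Illustrative instantiations: scalar scattering density and RGB airlight color.
sigma_s_net = SineField(out_dim=1)  # sigma_s(p)
c_s_net = SineField(out_dim=3)      # c_s(p)
```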

In the 2D image domain, per-pixel RGB values are normalized and mapped by trilinear interpolation among the $M^3$ LUT entries. In 3D NeRF-based systems, each field is trained end-to-end with suitable priors and regularization.
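
The trilinear lookup itself is standard and can be written explicitly (a generic sketch, not the paper's exact implementation; `rgb` is assumed to have shape (N, 3) with values in [0, 1]):

```python
import torch


def lut_trilinear(lut: torch.Tensor, rgb: torch.Tensor) -> torch.Tensor:
    """Map normalized RGB values through a 3D LUT H of shape (M, M, M, 3)."""
    M = lut.shape[0]
    g = rgb.clamp(0.0, 1.0) * (M - 1)           # continuous lattice coordinates
    i0 = g.floor().long().clamp(max=M - 2)      # lower corner of the enclosing cell
    f = g - i0.float()                          # fractional offsets in [0, 1)
    out = torch.zeros_like(rgb)
    # Blend the 8 corners of the surrounding lattice cell.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[:, 0] if dr else 1 - f[:, 0])
                     * (f[:, 1] if dg else 1 - f[:, 1])
                     * (f[:, 2] if db else 1 - f[:, 2]))
                corner = lut[i0[:, 0] + dr, i0[:, 1] + dg, i0[:, 2] + db]
                out = out + w[:, None] * corner
    return out
```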

5. Regularization and Disentanglement

Effective haze-aware vector fields require inductive biases and regularizers to separate haze effects from surface attributes:

  • Koschmieder consistency loss: Matches rendered surface and haze terms to analytic predictions.
  • Dark Channel Prior: Promotes low minimum channel values in clear-view renderings.
  • Photo-consistency loss: Standard NeRF $L_1$ color matching.
  • Eikonal loss: Constrains SDF gradients to unit norm for sharp geometry.
  • Atmospheric Consistency Loss (Li et al., 2023): Forces global scattering parameters $(A, \beta)$ to agree across views.

This suite of regularizations ensures that the vector field's haze components model spatially smooth, low-frequency scattering while geometric and radiance fields remain crisp and high-frequency.
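
As a concrete example, the dark channel prior can be imposed as a simple penalty on the rendered clear image (a hedged sketch; the patch size and the negated max-pooling trick are illustrative choices, not a specific paper's implementation):

```python
import torch
import torch.nn.functional as F_nn


def dark_channel_loss(clear_img: torch.Tensor, patch: int = 15) -> torch.Tensor:
    """Penalize large dark-channel values in the rendered haze-free image.

    clear_img: (B, 3, H, W) in [0, 1]. The dark channel is the per-patch
    minimum over all color channels; in haze-free natural images it is
    typically close to zero.
    """
    min_rgb = clear_img.min(dim=1, keepdim=True).values   # per-pixel channel minimum
    # Min-pooling implemented as negated max-pooling.
    dark = -F_nn.max_pool2d(-min_rgb, patch, stride=1, padding=patch // 2)
    return dark.mean()
```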

6. Quantitative Performance and Practical Impact

Haze-aware vector field approaches have demonstrated superior quantitative and qualitative performance on standardized image dehazing and 3D reconstruction benchmarks.

  • 4KDehazeFlow (Chen et al., 12 Nov 2025):
    • Achieves a PSNR of 21.62 dB on UHD datasets, roughly 2 dB above prior art.
    • LPIPS perceptual similarity of 0.3124, the lowest among UHD methods.
    • Removing the LUT leads to a PSNR loss of more than 6 dB and a strong blue bias.
    • RK4 integration is critical: the Euler method reduces PSNR by 3 dB.
    • Inference speed: 0.15 seconds per 4K image (vs. 450 s for diffusion-based models).
  • DehazeNeRF (Chen et al., 2023):
    • Sharper geometry, improved haze removal, and consistent novel view synthesis under adverse weather.
    • Effective partitioning of the volume density network into “solid” (high, crisp) and “air” (low, diffuse) components, achieved automatically through reconstruction and contrast-discriminative losses.

These results suggest that haze-aware vector field models offer practical, scalable, and physically interpretable frameworks for haze removal and scene understanding, outperforming both prior-based and naïve neural models in high-fidelity settings.

7. Applications and Research Directions

Haze-aware vector fields are central to:

  • Ultra-High-Definition (UHD) Image Dehazing: Real-time artifact-free dehazing for surveillance, autonomous vehicles, and remote sensing.
  • Physically Consistent 3D Scene Reconstruction: Novel view synthesis, 3D mapping under atmospheric degradation.
  • Inverse Rendering under Atmospheric Scattering: Enabling physically faithful scene recovery when haze parameters and scene geometry are unknown.
  • Data-Driven Atmospheric Correction: Embedding color and scattering transformations in network architectures for improved visual quality and reconstruction.

Ongoing research explores more general atmospheric models, real-world unpaired training, tighter integration of physical priors, and joint learning of scene and environmental parameters in both 2D and 3D modalities.
