Hybrid Motion-Adaptive Lighting Filter
- Hybrid Motion-Adaptive Lighting Smoothing Filter is a video module that reduces flicker in HDR and relit videos by combining motion estimation with edge-preserving filters.
- It integrates optical flow-based warping, adaptive temporal blending, and bilateral filtering to achieve temporally stable outputs without sacrificing spatial detail.
- Quantitative evaluations and user studies demonstrate significant improvements in lighting stability and SSIM, with minimal motion blur and ghosting artifacts.
A Hybrid Motion-Adaptive Lighting Smoothing Filter (HMA-LSF) is a video processing module designed to address temporal instability (flicker) during relighting or in high-dynamic-range (HDR) video fusion, while strictly preserving spatial detail and minimizing artifacts such as motion blur and ghosting. Such filters combine motion estimation, adaptive temporal blending, and edge-preserving spatial filtering to yield high-fidelity, temporally stable outputs. In contemporary state-of-the-art frameworks such as Hi-Light, HMA-LSF operates as a critical component in relit video stabilization, and parallel developments have adapted the concept to hybrid event-frame data fusion for HDR reconstruction (Liu et al., 30 Jan 2026, Wang et al., 2023).
1. Motivation and Problem Scope
Video relighting and HDR reconstruction frequently employ per-frame inference pipelines—such as diffusion models or framewise event integration—that exhibit noticeable temporal fluctuation in localized luminance, manifesting as visible flicker. Simple temporal smoothing (e.g., naive averaging) can suppress flicker, but at the expense of spatial sharpness: moving objects acquire motion blur, and edge detail can be lost or contaminated by “ghosts” from adjacent frames. The challenge is to construct a filter that enforces inter-frame lighting consistency while retaining local detail and respecting content motion.
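The ghosting failure mode of naive averaging can be seen in a one-dimensional toy example (illustrative, not from the paper): a bright object moving one pixel to the right leaves residual intensity at both its old and new positions after plain frame averaging.

```python
import numpy as np

# Two frames of a bright object moving one pixel to the right.
f0 = np.array([0.0, 1.0, 0.0, 0.0])
f1 = np.array([0.0, 0.0, 1.0, 0.0])

# Naive temporal averaging suppresses flicker but leaves a "ghost":
# half-intensity copies of the object at both positions.
avg = 0.5 * (f0 + f1)  # [0.0, 0.5, 0.5, 0.0]
```

A motion-adaptive filter avoids this by warping the past frame along the estimated motion before blending.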
Within Hi-Light (Liu et al., 30 Jan 2026), HMA-LSF is introduced after the guided relighting diffusion step, on a downsampled (480p) intermediate video, before detail restoration at the final output stage. Analogous concepts arise in the asynchronous Kalman Filter (AKF) for hybrid event-frame cameras (Wang et al., 2023), where fusion of asynchronous event streams and synchronous frames demands motion- and content-adaptive temporal filtering.
2. Core Methodology and Mathematical Formulation
The HMA-LSF in Hi-Light executes the following sequence per frame:
- Optical Flow Estimation: Farneback’s method (as implemented in OpenCV) computes a dense flow field $F_t$ between the previous and current intermediate relit frames, enforcing the brightness-constancy constraint:

$$I_t\bigl(\mathbf{p} + F_t(\mathbf{p})\bigr) \approx I_{t-1}(\mathbf{p}),$$

where $I_t$ is the relit frame at time $t$ and $\mathbf{p}$ denotes pixel coordinates.
- Motion-Adaptive Blending Weight: For each pixel $\mathbf{p}$, a temporal smoothing coefficient is defined as:

$$\alpha_t(\mathbf{p}) = \exp\!\bigl(-\lambda\,\|F_t(\mathbf{p})\|\bigr),$$

where $\lambda$ is a tunable parameter. Higher flow magnitude (faster motion) drives $\alpha_t$ toward zero, suppressing reliance on past frames to avoid ghosting.
- Motion-Compensated Blending: The previous smoothed frame, $\tilde{I}_{t-1}$, is warped according to the flow to yield $\tilde{I}_{t-1}^{\,w}$ and linearly blended with the current frame:

$$\tilde{I}_t(\mathbf{p}) = \alpha_t(\mathbf{p})\,\tilde{I}_{t-1}^{\,w}(\mathbf{p}) + \bigl(1 - \alpha_t(\mathbf{p})\bigr)\,I_t(\mathbf{p}).$$
- Optional Windowed Blending: Longer-range blending across the $k$ previous frames with recency weights $w_j$ can further stabilize lighting:

$$\tilde{I}_t(\mathbf{p}) = \frac{\sum_{j=0}^{k} w_j\,\tilde{I}_{t-j}^{\,w}(\mathbf{p})}{\sum_{j=0}^{k} w_j}, \qquad w_0 \ge w_1 \ge \dots \ge w_k.$$
- Edge-Preserving Bilateral Filtering: To remove compression noise or residual flicker without degrading edges, a bilateral filter is applied:

$$O_t(\mathbf{p}) = \frac{1}{W(\mathbf{p})} \sum_{\mathbf{q}\in\mathcal{N}(\mathbf{p})} G_{\sigma_s}\!\bigl(\|\mathbf{p}-\mathbf{q}\|\bigr)\, G_{\sigma_r}\!\bigl(|\tilde{I}_t(\mathbf{p}) - \tilde{I}_t(\mathbf{q})|\bigr)\, \tilde{I}_t(\mathbf{q}),$$

where $G_{\sigma_s}$ and $G_{\sigma_r}$ denote spatial and range Gaussian kernels, respectively, and $W(\mathbf{p})$ is the normalizing sum of the weights.
A concise per-frame pseudo-code for this process is included in (Liu et al., 30 Jan 2026). The analogous asynchronous Kalman filter architecture in event-frame fusion (Wang et al., 2023) employs a pixelwise state-space formulation, with log-brightness as the latent state, and a Kalman gain that adapts locally based on uncertainty estimates from events and frames, effectively interpolating between temporal stability and motion acuity.
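The motion-compensated blending step can be sketched in NumPy. This is an illustrative implementation under stated assumptions: the function name, the nearest-neighbour warp, and the default value of the decay parameter `lam` are all assumptions, a real pipeline would use OpenCV’s `calcOpticalFlowFarneback` for the flow, `cv2.remap` with bilinear interpolation for the warp, and a bilateral filter stage (omitted here).

```python
import numpy as np

def hma_lsf_step(prev_smoothed, cur_frame, flow, lam=1.0):
    """One motion-adaptive blending step (sketch): warp the previous
    smoothed frame along the flow, then blend with per-pixel weights
    that decay exponentially with flow magnitude, so fast-moving
    regions rely less on the past frame (avoiding ghosts)."""
    h, w = cur_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Backward warp with nearest-neighbour sampling (bilinear in practice).
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    warped = prev_smoothed[src_y, src_x]
    # Motion-adaptive weight on the (warped) past frame.
    mag = np.hypot(flow[..., 0], flow[..., 1])
    alpha = np.exp(-lam * mag)
    return alpha * warped + (1.0 - alpha) * cur_frame
```

With zero flow the output reproduces the previous smoothed frame (maximal temporal smoothing); with large flow magnitudes the weight collapses to zero and the current frame passes through unmodified.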
3. Parameterization and Algorithmic Considerations
Key tunable parameters in the HMA-LSF include:
- Smoothing Strength ($\lambda$): Governs the rate at which the blending weight decays with flow magnitude; higher values preserve moving-region sharpness at the cost of less smoothing in dynamic scenes.
- Bilateral Filter Parameters ($\sigma_s$, $\sigma_r$): Define the filter’s spatial window and intensity sensitivity, balancing noise reduction against edge preservation.
- Window Size ($k$) and Decay Weights ($w_j$): Determine memory depth and smoothing strength, trading temporal stability against temporal lag. A larger $k$ provides extra robustness but increases response latency.
In the AKF architecture for event-frame fusion (Wang et al., 2023), relevant parameters include the uncertainty models for events and for frames, per-pixel filter covariance initialization, and contrast-threshold calibration for event integration. Dynamic adaptation of the Kalman gain ensures motion- and exposure-adaptive smoothing.
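The per-pixel state-space idea behind the AKF can be illustrated with a minimal scalar Kalman update on log-brightness. Everything here is an illustrative assumption, not the paper’s exact formulation: the function name, the random-walk process model, and the parameter names are all hypothetical.

```python
def akf_pixel_update(x_prev, p_prev, z, r_meas, q_process=1e-3):
    """One scalar Kalman update (sketch of the AKF idea).
    x_prev: previous state estimate (e.g., per-pixel log-brightness).
    p_prev: previous state variance.
    z:      new measurement (frame or integrated-event observation).
    r_meas: measurement variance; large when the sensor is unreliable.
    """
    # Predict: a random-walk process model inflates uncertainty over time.
    p_pred = p_prev + q_process
    # Kalman gain adapts per pixel: noisy measurements (large r) -> small gain,
    # so the filter leans on the temporal prediction instead.
    k = p_pred / (p_pred + r_meas)
    x_new = x_prev + k * (z - x_prev)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

The gain interpolates between the two regimes the section describes: trusted measurements snap the state to the observation (motion acuity), while uncertain ones leave the temporally smoothed estimate nearly untouched (stability).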
4. Quantitative Evaluation and Empirical Results
The Hi-Light paper (Liu et al., 30 Jan 2026) presents extensive comparative experiments with 100 video clips. Incorporation of HMA-LSF into the intermediate relit video produces:
- Light Stability Score (SLS) improvement from ≈0.28 (best baseline) to ≈0.51 (+80%), quantifying flicker reduction via analysis of bright-pixel count, average intensity, and temporal derivatives.
- SSIM for detail preservation remains high (≈0.94 post-HMA-LSF vs. 0.60 for the best baseline), demonstrating negligible introduction of blur.
- Frequency-domain (Fourier) analysis confirms preservation of high-frequency spectral content after filtering.
- Time-series plots of mean bright-pixel intensity show reduction of framewise swings from ±5–10 units (pre-filter) to ±1–2 units (post-filter), and suppression of peak frame-to-frame differences (from >0.4 to near zero).
- Human study ranks Hi-Light (with HMA-LSF) highest in 95.6% of light-stability judgments.
In the context of event-frame video fusion (Wang et al., 2023), the AKF achieves ≈69% reduction in absolute-intensity MSE, ≈35–55% improvement in SSIM, and ≈25–50% improvement in HDR-Q-score relative to prior methods.
| System | SLS (↑) | SSIM (↑) | Flicker Suppression | Sharpness |
|---|---|---|---|---|
| Hi-Light+HMA-LSF | ~0.51 | 0.94 | Yes | Yes |
| Best Baseline | ~0.28 | 0.60 | Partial | No |
| AKF (Wang et al., 2023) | — | 0.86 | Yes (HDR videos) | Yes |
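As a rough illustration of how flicker reduction of the kind reported above can be quantified, a simple proxy measures frame-to-frame swings in mean intensity. This is not the SLS formula (which also incorporates bright-pixel counts and temporal derivatives); the function below is a hypothetical stand-in.

```python
import numpy as np

def flicker_amplitude(frames):
    """Mean absolute frame-to-frame change in average intensity:
    a crude flicker proxy. Stable lighting -> value near zero;
    framewise luminance swings -> value grows with their amplitude."""
    means = np.array([f.mean() for f in frames])
    return float(np.abs(np.diff(means)).mean())
```

A post-filter sequence whose mean intensity varies by only ±1–2 units would score far lower on this proxy than a pre-filter sequence swinging by ±5–10 units.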
5. Visual and Qualitative Outcomes
Qualitative assessment in (Liu et al., 30 Jan 2026) demonstrates:
- Without HMA-LSF: Video relighting exhibits high-amplitude temporal inconsistency in local intensity, leading to obvious flicker and visible changes in highlight and shadow detail frame-to-frame.
- With HMA-LSF: Time-series of pixel-wise intensity are smoothed, temporal variation is suppressed to within ±1–2 units, and object boundaries remain sharp even during fast motion. Video sequences such as “sunset lighting” avoid flickering highlights and preserve clarity of fast-moving foliage.
- The bilateral filter stage eliminates residual compression or diffusion-induced noise without introducing geometric artifacts.
For HDR video from event-frame cameras (Wang et al., 2023), AKF-based smoothing achieves artifact-free HDR reconstructions that are temporally consistent and spatially sharp, particularly in challenging lighting and motion regimes.
6. Broader Context and Related Methodologies
The conceptual framework of motion-adaptive smoothing filters, as exemplified by HMA-LSF and AKF, generalizes across both conventional frame-based video processing (temporal filtering post-diffusion, post-editing) and hybrid sensor fusion (asynchronous event-camera data with frames). Both leverage per-pixel or per-region motion cues to adapt the degree of temporal integration and combine this with spatially selective filtering to avoid edge blur. The approach is notably superior to naive temporal averaging or global filtering, as it respects scene dynamics and local structure.
A plausible implication is the applicability of such filters in real-time embedded vision, given their computational efficiency (per-frame or per-event complexity), as demonstrated in (Wang et al., 2023). Furthermore, the methodology can be extended to other domains requiring temporal stabilization without loss of spatial detail, such as denoising, frame interpolation, or video style transfer.
7. Summary
The Hybrid Motion-Adaptive Lighting Smoothing Filter combines optical flow–guided warping, motion-adaptive temporal blending, and edge-aware bilateral filtering to achieve robust, high-fidelity, temporally stable lighting in video. Its integration into relighting systems such as Hi-Light yields a marked improvement in temporal quality without sacrificing spatial resolution or inducing artifacts, as verified by both traditional metrics (SLS, SSIM) and user studies (Liu et al., 30 Jan 2026). In hybrid sensor systems, conceptually allied motion-adaptive Kalman filters further confirm the generality and efficacy of motion-adaptive lighting smoothing strategies (Wang et al., 2023).