
Dual Exposure Mode in Imaging

Updated 5 March 2026
  • Dual Exposure Mode is defined as the capture and fusion of under- and over-exposed images to preserve highlight and shadow details.
  • Implementation strategies include time-multiplexed capture, dual-sensor setups, and per-pixel exposure variations to achieve high dynamic range and robust image quality.
  • Applications span HDR imaging, depth sensing, and joint deblurring-denoising, leveraging algorithmic and sensor co-design for improved real-time performance.

Dual Exposure Mode refers to the acquisition, processing, and fusion of two images—one under-exposed and one over-exposed—of the same scene, to extend dynamic range beyond that of a single exposure. This paradigm is foundational in computational photography, hardware ISP design, real-time imaging, and 3D scene understanding, and underpins diverse algorithmic and sensor architectures for HDR, denoising/deblurring, illuminant estimation, and depth sensing. The dual-exposure paradigm is realized through various means: time-multiplexed capture, dual-sensor stereo, per-pixel exposure multiplexing (such as dual-ISO and spatially varying exposure), or hardware pixel architectures (Quad-Bayer, staggered conversion gain). Dual-Exposure Mode subsumes both algorithmic fusion pipelines and sensor control strategies, presenting a range of signal processing and machine learning solutions targeting low-latency, high-fidelity scene rendering.

1. Fundamental Principles of Dual Exposure Mode

Dual Exposure Mode is defined as the acquisition of two co-registered frames—one short (low exposure) and one long (high exposure)—to capture scene content that would be lost to saturation or quantization noise in a single shot. The approach exploits the complementary information: short exposures preserve highlight detail but introduce noise in shadows; long exposures suppress noise in dark areas but can clip highlights due to limited full well capacity of sensor photodiodes.
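This complementarity can be made concrete with a minimal NumPy sketch that merges a short/long pair into a relative radiance map, assuming linear raw-domain values in [0, 1] and known exposure times. The function name, saturation threshold, and exposure-proportional weighting are illustrative choices, not drawn from any cited method:

```python
import numpy as np

def merge_dual_exposure(short, long, t_short, t_long, sat=0.95):
    """Merge a short/long exposure pair into a relative radiance map.

    Assumes linear (raw-domain) pixel values in [0, 1] and known exposure
    times. Clipped long-exposure pixels fall back to the short exposure;
    elsewhere the two estimates are averaged with exposure-proportional
    weights (longer exposure -> higher SNR in the shadows).
    """
    r_short = short / t_short                    # radiance estimate, short frame
    r_long = long / t_long                       # radiance estimate, long frame
    w_long = np.where(long < sat, t_long, 0.0)   # discard saturated pixels
    w_short = np.full_like(short, t_short)
    return (w_short * r_short + w_long * r_long) / (w_short + w_long)

# Toy scene: one mid-tone pixel and one highlight that clips in the long frame.
radiance = np.array([0.2, 3.0])
t_s, t_l = 0.25, 1.0
short = np.clip(radiance * t_s, 0, 1)    # highlight survives at 0.75
long = np.clip(radiance * t_l, 0, 1)     # highlight clips at 1.0
fused = merge_dual_exposure(short, long, t_s, t_l)
```

Both pixels recover their true radiance: the mid-tone from the weighted blend, the highlight from the short frame alone once the clipped long-frame sample is masked out.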

Key implementation strategies include:

  • Time-multiplexed capture: two sequential frames at different exposure times, requiring alignment for dynamic scenes.
  • Dual-sensor or stereo capture: simultaneous frames from two sensors at different exposure settings.
  • Per-pixel exposure multiplexing: dual-ISO and spatially varying exposure (SVE) patterns within a single raw frame.
  • Hardware pixel architectures: Quad-Bayer binning and staggered conversion gain designs.

Fusion and interpretation of dual-exposure imagery require co-registration, dynamic range normalization, and scene radiance recovery. The process is widely integrated into HDR imaging, depth estimation, joint denoising-deblurring, and hardware ISP pipelines (Ramakarishnan et al., 2021, Yang et al., 2023, Choi et al., 2024).

2. Algorithmic Fusion Methods for Dual Exposures

Fusion strategies for dual exposures can be broadly categorized into:

  • Transform-domain fusion: Frequency-based techniques (DCT, Fourier) perform image decomposition, coefficient-wise merging, and synthesis, operating on blocks or feature maps (Ramakarishnan et al., 2021, Yang et al., 2023).
  • Spatial-domain fusion: Multi-scale pyramid blending, per-pixel weighted averaging using contrast, saturation, and well-exposedness metrics (as in the Mertens fusion framework) (Kinoshita et al., 2018).
  • Learning-based dual-exposure fusion: Deep CNN architectures accept concatenated dual-exposure inputs, process them with explicit channelwise or spatial decomposition (e.g., LightFuse's GlobalNet and DetailNet), and are trained with data-driven losses for optimal tone reproduction and artifact suppression (Liu et al., 2021, Yang et al., 2023).
  • 3D LUT-based and implicit function fusion: Teacher-student architectures distill per-pixel RGB mappings (LUT grids) parameterized via multi-exposure inputs, providing both high efficiency and real-time editability (Su et al., 2024).
  • Physically-motivated dual-exposure denoising/deblurring: Raw domain fusion leverages the complementary SNR–blur tradeoff in short/long dual-exposed pixels (or dual sensor records), with specialized network architectures for demosaicking, denoising, deblurring, and feature-level fusion (Zhao et al., 2024, Shekarforoush et al., 2023).

A key theoretical insight is that frequency-domain averaging in DCT/DFT space simultaneously achieves the effect of exposure, contrast, and chroma balancing—the DC (mean) coefficient tracks well-exposedness, mid-frequency ACs encode contrast, and cross-channel blocks capture saturation, paralleling explicit weight-based fusion (Ramakarishnan et al., 2021).
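This frequency-domain view can be sketched with a toy block-DCT fusion in NumPy: the averaged DC coefficient balances mean brightness, while max-magnitude selection keeps the stronger AC (contrast) terms. The specific fusion rule here is a common textbook choice, not the exact method of the cited papers:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis: C @ x is the 1-D DCT of x, C.T its inverse.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    C[0] /= np.sqrt(2)
    return C

def dct_fuse_blocks(img_a, img_b, n=8):
    """Fuse two exposures block-by-block in DCT space.

    Averages the DC terms (brightness / well-exposedness) and keeps the
    larger-magnitude AC terms (local contrast). Image sides must be
    multiples of the block size n.
    """
    C = dct_matrix(n)
    h, w = img_a.shape
    out = np.empty_like(img_a, dtype=float)
    for i in range(0, h, n):
        for j in range(0, w, n):
            A = C @ img_a[i:i+n, j:j+n] @ C.T            # forward 2-D DCT
            B = C @ img_b[i:i+n, j:j+n] @ C.T
            F = np.where(np.abs(A) > np.abs(B), A, B)    # stronger detail wins
            F[0, 0] = 0.5 * (A[0, 0] + B[0, 0])          # balance mean level
            out[i:i+n, j:j+n] = C.T @ F @ C              # inverse 2-D DCT
    return out

# Two flat "exposures" of the same block: the fused DC is their mean.
a = np.full((8, 8), 0.2)
b = np.full((8, 8), 0.6)
fused = dct_fuse_blocks(a, b)
```

On real image pairs the max-AC rule preserves whichever exposure carries more local detail in each block, which is what the coefficient-wise merging described above formalizes.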

3. Sensor and ISP Architectures for Dual Exposure

Sensor-level dual exposure is implemented via:

  • Row/column multiplexing: Dual-ISO sensors interleave high/low gain on alternating rows; spatially varying exposure (SVE) patterns are tiled over the pixel array (Qu et al., 2023, Go et al., 2019).
  • Quad-Bayer and per-pixel exposure: Binning or per-pixel readout assigns variable exposures/gains within macro-pixel blocks; both short- and long-exposure pixels are available in a single raw frame (Zhao et al., 2024).
  • Staggered conversion gain: Some sHDR sensor designs achieve dual exposure via in-pixel charge splitting, enabling simultaneous readout at different gain settings (Afifi et al., 2024).
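De-interleaving such a row-multiplexed raw is a pure indexing operation. The sketch below assumes a hypothetical layout in which gains alternate every two rows (preserving Bayer pairs); real sensors differ in their interleaving patterns, so the layout here is illustrative only:

```python
import numpy as np

def split_dual_iso(raw):
    """Split a row-interleaved dual-ISO raw frame into its two exposures.

    Assumes a hypothetical layout where the first two rows of every
    four-row group carry the low-gain samples and the next two the
    high-gain ones. Each output has half the vertical resolution of the
    input, which is the resolution cost noted for these sensors.
    """
    h = raw.shape[0] - raw.shape[0] % 4       # drop any trailing partial group
    r = raw[:h].reshape(h // 4, 4, -1)        # groups of 4 rows
    low = r[:, 0:2].reshape(h // 2, -1)       # rows 0-1 of each group
    high = r[:, 2:4].reshape(h // 2, -1)      # rows 2-3 of each group
    return low, high

raw = np.arange(32).reshape(8, 4)   # 8x4 toy raw; row index encodes the gain
low, high = split_dual_iso(raw)
```

After splitting, each half-height sub-image is demosaicked and the pair is fused as in any other dual-exposure pipeline.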

Sensor control strategies, such as automatic dual-exposure control (ADEC), dynamically adjust exposure brackets in response to detected scene DR, optimizing for fill-in and avoiding saturation or excessive noise (Choi et al., 2024). Optimal multiplexing patterns are selected based on recoverability of clipped pixels (SVE-Risk) rather than naive SNR, and can be efficiently enumerated for a given set of candidate exposure/gain levels (Qu et al., 2023).
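A toy control loop in this spirit, which is not the published ADEC algorithm but a hedged illustration, might widen or narrow the EV gap based on the fractions of clipped and noise-buried pixels; all thresholds below are hypothetical:

```python
import numpy as np

def adjust_ev_gap(long_img, short_img, ev_gap,
                  clip_thresh=0.99, noise_thresh=0.02,
                  target=0.01, step=0.5, max_gap=4.0):
    """One step of a toy automatic dual-exposure controller.

    Hypothetical policy: widen the EV gap while the long frame still
    clips or the short frame is still noise-dominated, and narrow it
    once both fractions fall below the target, avoiding an unnecessarily
    wide bracket (which costs SNR and fusion robustness).
    """
    clipped = float(np.mean(long_img >= clip_thresh))
    buried = float(np.mean(short_img <= noise_thresh))
    if clipped > target or buried > target:
        ev_gap = min(ev_gap + step, max_gap)   # scene DR exceeds the bracket
    else:
        ev_gap = max(ev_gap - step, 1.0)       # bracket wider than needed
    return ev_gap

# High-DR toy frames: the long frame clips in 25% of its pixels.
long_img = np.array([1.0, 0.5, 0.4, 0.3])
short_img = np.array([0.3, 0.1, 0.08, 0.06])
new_gap = adjust_ev_gap(long_img, short_img, ev_gap=2.0)
```

Running such a step per frame lets the bracket track scene dynamic range, which is the behavior the published controllers optimize with more principled criteria.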

ISP pipelines harness the dual-exposure data for demosaicking, local/global contrast enhancement, and dynamic range normalization, often propagating both frames through the pipeline for HDR fusion, denoising, white balancing, and depth recovery (Afifi et al., 2024, Go et al., 2019).

4. Applications Across Imaging Domains

Dual Exposure Mode undergirds a diverse set of imaging tasks:

  • High Dynamic Range Imaging: Classical MEF, transform-based, and learning-based methods produce seamless HDR outputs from two exposures, with state-of-the-art PSNR/SSIM significantly outperforming single-frame solutions and traditional three-exposure bracketing (Ramakarishnan et al., 2021, Liu et al., 2021, Su et al., 2024).
  • Depth and 3D Sensing: Dual-exposure stereo and joint HDR+disparity pipelines manage DR and radiometric overlap for robust stereo matching, using dynamic exposure control and motion-aware feature fusion to maximize depth accuracy under challenging illumination (Choi et al., 2024, Chari et al., 2020).
  • Illuminant Estimation: Dual-exposure features (compact 15-dimensional DEF vectors) track chromatic transformations between exposures, enabling sub-millisecond and sub-kilobyte illuminant estimation models that rival larger single-frame networks (Afifi et al., 2024).
  • Joint Deblurring-Denoising: Dual-exposure captures (or dual-sensor streams) enable physical-model-based architectures that exploit the complementary tradeoff between blur and SNR, delivering significant improvements in denoising, deblurring, and overall perceptual quality in low-light and motion-challenged conditions (Shekarforoush et al., 2023, Zhao et al., 2024).
  • ISP and Embedded Real-time Imaging: Lightweight CNNs, 3D LUTs, and real-time algorithms utilizing dual-exposures provide ultra-fast and resource-conscious solutions suitable for mobile, FPGA, and SoC deployment, maintaining high image quality at extreme computational constraints (Liu et al., 2021, Su et al., 2024, Zhao et al., 2024).
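To illustrate the illuminant-estimation idea above, a compact feature can be built from per-channel statistics of the two frames. The 7-dimensional vector below is purely illustrative and is not the 15-dimensional DEF of Afifi et al.; it only shows why a short/long pair carries chromatic information that a single frame does not:

```python
import numpy as np

def dual_exposure_feature(img_short, img_long, eps=1e-6):
    """Toy dual-exposure chroma feature (illustrative, not the published DEF).

    Concatenates the mean log-chromaticity of each exposure with the
    per-channel gain between them; under a global illuminant the gain is
    exposure-dependent, while chromaticity shifts between the two frames
    track highlight clipping and noise.
    """
    def log_chroma(img):                       # mean log(R/G) and log(B/G)
        m = img.reshape(-1, 3).mean(axis=0)
        return np.log((m[[0, 2]] + eps) / (m[1] + eps))
    gain = (img_long.reshape(-1, 3).mean(0) + eps) / \
           (img_short.reshape(-1, 3).mean(0) + eps)
    return np.concatenate([log_chroma(img_short), log_chroma(img_long), gain])

# Flat greenish patch captured at a 4x exposure ratio.
short = np.full((4, 4, 3), [0.1, 0.2, 0.1])
long = np.full((4, 4, 3), [0.4, 0.8, 0.4])
feat = dual_exposure_feature(short, long)    # 7-dimensional feature vector
```

A few-kilobyte regressor on such features is what makes the sub-millisecond, sub-kilobyte estimators described above feasible.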

5. Quantitative Assessment and Performance

Dual Exposure Mode methods are evaluated with a range of metrics:

  • Standard metrics: PSNR, SSIM, mean-squared error (IMMSE), Quality Q (HDR-VDP2), TMQI, MEF-SSIM, MI, FMI, VIF, and discrete entropy.
  • Control and fusion runtime: Dual-exposure CNNs (e.g., LightFuse) run in 0.03 s on GPU and 0.15 s on CPU, achieving real-time performance with only ~1.6k parameters; 3D LUT systems run at 110 fps on 4K frames (Liu et al., 2021, Su et al., 2024).
  • DR expansion and error: Dual-exposure 3D/stereo systems expand DR by up to 1.6× without sacrificing depth accuracy; state-of-the-art methods report PSNR increases of >3 dB and SSIM >0.97 compared to prior methods on standard datasets (Choi et al., 2024).
  • Denoising-deblurring performance: Dual-exposure pipelines (e.g., QRNet, joint deblurring-denoising) surpass single-shot and single-exposure peers in both PSNR/SSIM and visual artifact suppression, with notable improvements in extremely adverse SNR and blur regimes (Zhao et al., 2024, Shekarforoush et al., 2023).
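The most common of these metrics, PSNR, reduces to a few lines of NumPy and is worth keeping on hand as a sanity check when comparing fusion outputs:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# A uniform error of 0.01 on a [0, 1] image gives MSE = 1e-4, i.e. 40 dB.
ref = np.zeros((4, 4))
noisy = ref + 0.01
```

SSIM and the HDR-specific metrics (HDR-VDP2, TMQI, MEF-SSIM) require perceptual models beyond a one-liner, which is why published comparisons lean on reference implementations.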

6. Limitations, Challenges, and Design Considerations

Dual Exposure Mode presents several design and operational challenges:

  • Registration and motion: Accurate pixel-level or block-level alignment is critical; shots with large inter-frame motion or parallax will yield ghosting or fusion artifacts, especially in non-simultaneous systems (Ramakarishnan et al., 2021).
  • Exposure gap tuning: Excessive EV spacing leads to boundary artifacts and unrecoverable clippings in certain fusion methods; most pipelines recommend ±1–2 EV and dynamic scene analysis (Ramakarishnan et al., 2021, Su et al., 2024).
  • Real-time constraints: For embedded and mobile devices, computational and memory overhead drive the choice toward separable-convolution CNNs, 3D LUT grids, or low-parameter MLPs (Liu et al., 2021, Su et al., 2024, Afifi et al., 2024).
  • Sensor limitations: Simultaneous dual exposure (without motion ghosts) requires per-pixel architectures or hardware interleaving; split-row dual-ISO and Quad-Bayer patterns trade spatial resolution for exposure diversity and are constrained by sensor design (Qu et al., 2023, Zhao et al., 2024, Go et al., 2019).
  • Illuminant estimation and color constancy: Dual-exposure approaches deliver compact, high-accuracy estimators but remain challenged in mixed-illuminant environments and when cross-sensor variations are present (Afifi et al., 2024).
  • Spatial vs. frequency fusion tradeoff: Methods that operate only in the spatial domain may lack global consistency; hybrid spatial-frequency (DCT, Fourier, attention) integrations address these, at possible increased compute cost (Ramakarishnan et al., 2021, Yang et al., 2023).
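The exposure-gap tradeoff above can be made concrete with a toy coverage model: each EV of spacing doubles the short/long exposure ratio, widening the usable radiance range at the cost of the boundary artifacts already noted. The thresholds below are illustrative, not from any cited pipeline:

```python
import numpy as np

def coverage(radiance, t_long, ev_gap, sat=1.0, floor=0.01):
    """Toy estimate of the scene fraction usably captured by a dual pair.

    A pixel counts as covered if either frame records it above the noise
    floor and below saturation; the short exposure is 2**ev_gap times
    shorter than the long one.
    """
    t_short = t_long / 2 ** ev_gap
    ok_long = (radiance * t_long >= floor) & (radiance * t_long < sat)
    ok_short = (radiance * t_short >= floor) & (radiance * t_short < sat)
    return float(np.mean(ok_long | ok_short))

# Five decades of scene radiance: widening the gap from 2 EV to 4 EV
# recovers the bright region the 2 EV bracket still clips.
radiance = np.array([0.005, 0.5, 5.0, 500.0])
```

This is why the EV spacing is a tuned parameter: too narrow and highlights stay clipped, too wide and mid-tones fall between the two usable ranges, producing the boundary artifacts described above.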

7. Future Directions and Open Problems

Continued progress in Dual Exposure Mode is driven by:

  • Unified joint tasks: Integration of HDR fusion, denoising, deblurring, demosaicking, depth, and white balance into a single pipeline leveraging dual exposures (Zhao et al., 2024, Shekarforoush et al., 2023).
  • Hardware-algorithm co-design: Adaptive exposure/gain multiplexing patterns optimized for both signal recoverability and downstream neural network performance, exploiting SVE-Risk and supporting cross-algorithm universality (Qu et al., 2023).
  • Embedded, adaptive deployment: Maximizing quality-versus-latency tradeoffs for real-time platforms via quantization, LUT compression, and dynamic resource management (Liu et al., 2021, Su et al., 2024).
  • Extreme scenes and challenging illumination: Developing fusion strategies robust to fast motion, flash/sparkle, nonuniform lighting, and scenes with complex spatially varying illuminants.
  • Extending beyond dual exposures: Generalizing approaches for flexible N-exposure fusion while retaining the low-latency and low-artifact properties of dual-exposure pipelines (Yang et al., 2023, Su et al., 2024).

Dual Exposure Mode continues to serve as a technically rich and fertile ground for joint algorithm and hardware advances in scientific imaging, computational photography, and scene understanding.
