Auto-Exposure Fusion for Single-Image Shadow Removal (2103.01255v2)

Published 1 Mar 2021 in cs.CV

Abstract: Shadow removal is still a challenging task due to its inherent background-dependent and spatial-variant properties, leading to unknown and diverse shadow patterns. Even powerful state-of-the-art deep neural networks could hardly recover traceless shadow-removed background. This paper proposes a new solution for this task by formulating it as an exposure fusion problem to address the challenges. Intuitively, we can first estimate multiple over-exposure images w.r.t. the input image to let the shadow regions in these images have the same color with shadow-free areas in the input image. Then, we fuse the original input with the over-exposure images to generate the final shadow-free counterpart. Nevertheless, the spatial-variant property of the shadow requires the fusion to be sufficiently `smart', that is, it should automatically select proper over-exposure pixels from different images to make the final output natural. To address this challenge, we propose the shadow-aware FusionNet that takes the shadow image as input to generate fusion weight maps across all the over-exposure images. Moreover, we propose the boundary-aware RefineNet to eliminate the remaining shadow trace further. We conduct extensive experiments on the ISTD, ISTD+, and SRD datasets to validate our method's effectiveness and show better performance in shadow regions and comparable performance in non-shadow regions over the state-of-the-art methods. We release the model and code in https://github.com/tsingqguo/exposure-fusion-shadow-removal.

Citations (121)

Summary

  • The paper recasts shadow removal as an exposure fusion task by generating multiple over-exposure images to effectively address spatial-variant shadow effects.
  • It introduces a dual-network approach with Shadow-Aware FusionNet for pixel-level fusion and Boundary-Aware RefineNet to refine penumbra regions.
  • Experimental results on ISTD, ISTD+, and SRD datasets show notable RMSE improvements, highlighting potential for future video shadow removal research.

Auto-Exposure Fusion for Single-Image Shadow Removal: An Overview

The paper "Auto-Exposure Fusion for Single-Image Shadow Removal" by Lan Fu et al. addresses the challenge of removing shadows from single images by recasting the problem as an exposure fusion task: multiple over-exposure versions of the shadow image are generated so that shadowed regions match the color of neighboring shadow-free areas, and these versions are then fused with the input to counter the spatial-variant color discrepancies that shadows introduce.
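The over-exposure step can be sketched with a simple per-channel linear model. This is a hypothetical stand-in for the paper's learned exposure estimation; the function names and fixed gains below are illustrative only:

```python
import numpy as np

def over_expose(img, gain, bias=0.0):
    # Simple linear exposure model: out = gain * img + bias, clipped to
    # [0, 1]. The paper learns such relighting parameters per image;
    # the fixed gains used here are illustrative.
    return np.clip(gain * img + bias, 0.0, 1.0)

def exposure_stack(img, gains):
    # Stack of progressively over-exposed versions of the input,
    # shape (K, H, W, C); the first entry keeps the original exposure.
    return np.stack([over_expose(img, g) for g in gains], axis=0)

# A uniformly shadowed patch brightened at several exposure levels.
patch = np.full((4, 4, 3), 0.2)
stack = exposure_stack(patch, gains=[1.0, 1.5, 2.0, 3.0])
```

Each slice of the stack is a candidate for the shadow region; selecting among them per pixel is the job of the fusion network described next.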

Methodology and Key Contributions

The paper introduces a novel framework consisting of two main components: the shadow-aware FusionNet and the boundary-aware RefineNet, which are pivotal in effectively eliminating shadows while preserving the natural appearance of the image.

  1. Shadow-Aware FusionNet: At the core of this work is a deep learning model designed to handle the intricacies of shadowed regions by employing smart fusion strategies. The idea is to generate multiple over-exposure images and utilize a per-pixel fusion mechanism to assemble these into a shadow-free output. This process is critical, given the spatial-variant nature of shadows, which necessitates versatile corrections for different image areas.
  2. Boundary-Aware RefineNet: Another significant contribution addresses the often problematic penumbra regions, the partially lit transition zones along shadow boundaries. The RefineNet leverages a boundary mask to focus refinement on these transitional edges, ensuring seamless blending with the rest of the image; combined with a boundary-aware loss function, this enhances the overall fidelity and aesthetic quality of the de-shadowed images.
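The per-pixel fusion FusionNet performs can be sketched as a softmax-weighted sum over the exposure stack. In this sketch the network's predicted weight maps are replaced by hypothetical hand-set logits:

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse(stack, logits):
    # stack:  (K, H, W, C) over-exposure images, stack[0] = input.
    # logits: (K, H, W) unnormalized per-pixel weight maps; in the paper
    #         these come from FusionNet, here they are hand-set.
    # Softmax over K makes the weights sum to 1 at every pixel.
    w = softmax(logits, axis=0)[..., None]   # (K, H, W, 1)
    return (w * stack).sum(axis=0)           # (H, W, C)

# Three exposures of a flat patch; strongly prefer the brightest image
# at pixel (0, 0), equal weights everywhere else.
stack = np.stack([np.full((2, 2, 3), v) for v in (0.2, 0.5, 0.8)])
logits = np.zeros((3, 2, 2))
logits[2, 0, 0] = 10.0
out = fuse(stack, logits)   # out[0, 0] ~ 0.8, out[1, 1] ~ 0.5
```

Because the weights vary per pixel, brighter candidates can be selected inside the shadow while the original input dominates elsewhere, which is exactly the "smart" selection the spatial-variant shadow property demands.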

Experimental Results

The authors conduct extensive evaluations on the prominent ISTD, ISTD+, and SRD datasets to validate the efficacy of their approach. Their results indicate a marked improvement over existing state-of-the-art techniques: the proposed method achieves a notable reduction in root mean square error (RMSE) in shadow-affected regions while remaining comparable in non-shadow regions, despite the diversity of shadow patterns across scenes.
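The per-region evaluation can be sketched as an RMSE restricted by the shadow mask. Note that the standard ISTD/SRD protocol computes RMSE in LAB color space; the RGB version below is a simplification for illustration:

```python
import numpy as np

def masked_rmse(pred, gt, mask):
    # RMSE over pixels where mask is True (e.g. the shadow region).
    # Benchmarks on ISTD/SRD compute this in LAB color space; RGB is
    # used here purely for illustration.
    diff = (pred - gt)[mask]          # boolean mask selects (N, C) pixels
    return float(np.sqrt((diff ** 2).mean()))

# Toy example: prediction off by 0.1 inside a one-pixel shadow mask.
gt = np.zeros((2, 2, 3))
pred = gt.copy()
pred[0, 0] = 0.1
shadow_mask = np.zeros((2, 2), dtype=bool)
shadow_mask[0, 0] = True
err = masked_rmse(pred, gt, shadow_mask)   # ~0.1
```

Reporting shadow-region and non-shadow-region RMSE separately, as the paper does, prevents a method from looking strong simply because most of the image is unshadowed.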

Implications and Future Prospects

By recasting shadow removal as an exposure fusion task, this research presents a paradigm shift in handling shadowed images. The implications of this work extend to various practical applications in computer vision, where shadow effects can significantly hinder tasks like object recognition and semantic segmentation. Beyond the immediate improvements in shadow removal, such an approach could inspire further research into exposure-based correction techniques for other image artifacts.

Looking forward, the authors suggest extending their method's principles to video shadow removal, which poses additional temporal consistency challenges. This offers a promising avenue for further exploration, particularly with the increasing demand for high-quality visual content in dynamic environments.

Overall, the paper provides a robust and innovative solution for single-image shadow removal, advancing the field's current capabilities and setting a foundation for future research in exposure-based image correction methodologies.