
Mask-ShadowGAN: Learning to Remove Shadows from Unpaired Data (1903.10683v3)

Published 26 Mar 2019 in cs.CV and cs.MM

Abstract: This paper presents a new method for shadow removal using unpaired data, enabling us to avoid tedious annotations and obtain more diverse training samples. However, directly employing adversarial learning and cycle-consistency constraints is insufficient to learn the underlying relationship between the shadow and shadow-free domains, since the mapping between shadow and shadow-free images is not simply one-to-one. To address the problem, we formulate Mask-ShadowGAN, a new deep framework that automatically learns to produce a shadow mask from the input shadow image and then takes the mask to guide the shadow generation via re-formulated cycle-consistency constraints. In particular, the framework simultaneously learns to produce shadow masks and to remove shadows, to maximize the overall performance. Also, we prepared an unpaired dataset for shadow removal and demonstrated the effectiveness of Mask-ShadowGAN in various experiments, even though it was trained on unpaired data.

Citations (178)

Summary

  • The paper presents a novel GAN framework that leverages shadow masks to remove shadows from unpaired images.
  • It employs cycle-consistency constraints and adversarial learning to accurately reconstruct shadow details while preserving textures.
  • Experimental results on USR, SRD, and ISTD datasets demonstrate improved performance over traditional paired-data approaches.

An Overview of Mask-ShadowGAN for Shadow Removal from Unpaired Data

The paper "Mask-ShadowGAN: Learning to Remove Shadows from Unpaired Data" presents a novel approach to shadow removal that leverages unpaired datasets. Unlike traditional shadow removal methods, which rely heavily on supervised learning with paired shadow and shadow-free images, Mask-ShadowGAN circumvents these limitations with a generative adversarial network (GAN) architecture specifically adapted for unpaired data.

Problem Context

Shadow removal is a challenging computer vision task due to the variability in shadow shapes and backgrounds. Conventional approaches, requiring paired data obtained through manual image capture processes, face limitations in data diversity and practical applicability. Additionally, inconsistencies in color and luminosity between paired images arise due to environmental and camera condition changes. Mask-ShadowGAN aims to address these issues by utilizing unpaired datasets, thus facilitating more diverse and extensive training data without cumbersome acquisition procedures.

Method Overview

The proposed framework, Mask-ShadowGAN, combines adversarial learning with reformulated cycle-consistency constraints to capture the relationship between the shadow and shadow-free domains, which is not a simple one-to-one mapping: many different shadow shapes and positions can exist over the same background. The key innovation is the incorporation of shadow masks, which guide the generation process to produce realistic shadow images from shadow-free inputs and vice versa.

Mask-ShadowGAN consists of two primary components:

  • Learning from Shadow Images: Involves generating shadow-free images from real shadow images and reconstructing them back into shadow images using learned shadow masks, thereby preserving shadow shapes and positions.
  • Learning from Shadow-Free Images: Utilizes shadow masks derived from real shadow images, facilitating the generation of diverse shadow images from shadow-free ones, effectively broadening the set of training shadows.
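The mask that links the two branches is derived from the difference between the input shadow image and the generated shadow-free image, binarized with a threshold (the paper uses Otsu's method for this). A minimal NumPy sketch of this mask-extraction step, where the function names and shapes are illustrative assumptions rather than the authors' exact implementation:

```python
import numpy as np

def otsu_threshold(values):
    # Otsu's method: pick the threshold that maximizes the
    # between-class variance of a 1-D intensity distribution in [0, 255].
    hist, _ = np.histogram(values, bins=256, range=(0, 255))
    total = values.size
    cum_count = np.cumsum(hist)
    cum_sum = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum_count[t - 1]          # pixels below the threshold
        w1 = total - w0                # pixels at or above it
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_sum[t - 1] / w0
        mu1 = (cum_sum[-1] - cum_sum[t - 1]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def shadow_mask(shadow_img, shadow_free_img):
    # Shadow pixels are darker than their shadow-free counterparts,
    # so the (shadow-free minus shadow) difference is large inside
    # the shadow region and near zero elsewhere.
    diff = shadow_free_img.astype(np.float64) - shadow_img.astype(np.float64)
    gray = diff.mean(axis=-1)                  # average over color channels
    t = otsu_threshold(np.clip(gray, 0, 255))
    return (gray > t).astype(np.uint8)         # binary mask: 1 = shadow
```

In the full framework, this mask is fed back into the shadow-generation branch so that the reconstructed shadow image keeps the same shadow shape and position as the input.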

Dataset and Experimental Validation

The authors introduce the Unpaired Shadow Removal (USR) dataset, an unpaired collection containing thousands of diverse shadow and shadow-free images across different scenes. Experiments demonstrate Mask-ShadowGAN's effectiveness on this dataset, where it outperforms several state-of-the-art methods, including both traditional techniques and deep-learning-based approaches trained on paired data.

Quantitative evaluations utilizing the SRD and ISTD datasets further illustrate the method's capability, achieving competitive RMSE scores compared to existing methodologies. Visual comparisons highlight Mask-ShadowGAN's proficiency in preserving texture details and producing realistic outputs without the artifacts frequently associated with GAN-generated images when solely adversarial loss is used.
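Shadow-removal papers in this line of work typically report the root-mean-square error (RMSE) between the recovered image and a ground-truth shadow-free image, often computed in the LAB color space. A minimal sketch of the metric itself, computed here directly on raw pixel arrays for simplicity (the color-space conversion is omitted):

```python
import numpy as np

def rmse(pred, target):
    # Per-pixel root-mean-square error averaged over all pixels
    # and channels; lower is better.
    pred = pred.astype(np.float64)
    target = target.astype(np.float64)
    return float(np.sqrt(np.mean((pred - target) ** 2)))
```

In practice the score is averaged over a test set, and is often also reported separately for the shadow region, using the ground-truth mask to restrict the computation.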

Implications and Future Directions

This research underscores the practicality of using unpaired data for complex image-translation tasks, suggesting broader applications beyond shadow removal, such as synthesizing other environmental conditions (like rain or snow) or removing and adding objects in a scene. The exploration of mask-guided GANs also opens avenues for image manipulation and editing, with potential impact on how visual content is processed and enhanced in computer vision.

Future work could extend this paradigm to increasingly complex environmental scenarios, incorporate more advanced generative techniques to improve shadow mask generation, and further establish GANs in real-world applications where paired data is impractical or impossible to collect.