Deshadow-Anything: When Segment Anything Model Meets Zero-Shot Shadow Removal (2309.11715v3)
Abstract: Segment Anything (SAM), an advanced universal image segmentation model trained on an expansive visual dataset, has set a new benchmark in image segmentation and computer vision. However, it struggles to distinguish shadows from their backgrounds. To address this, we develop Deshadow-Anything, which exploits the generalization afforded by large-scale datasets: we fine-tune SAM on large-scale data to perform image shadow removal. A diffusion model diffuses along the edges and textures of an image, removing shadows while preserving image detail. Furthermore, we design Multi-Self-Attention Guidance (MSAG) and adaptive input perturbation (DDPM-AIP) to accelerate the iterative training of the diffusion model. Experiments on shadow removal tasks demonstrate that these methods effectively improve image restoration performance.
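The training-acceleration claim builds on the input-perturbation idea for diffusion models (DDPM-IP): during training, the network's input is noised with a slightly perturbed copy of the target noise, simulating the prediction error the sampler will see at inference time and thereby reducing exposure bias. Below is a minimal PyTorch sketch of one such training step. The model interface, the fixed `gamma` (the paper's "adaptive" schedule for DDPM-AIP is not specified in the abstract), and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def ddpm_ip_training_step(model, x0, alphas_cumprod, gamma=0.1):
    """One diffusion training step with input perturbation (DDPM-IP sketch).

    Assumptions (hypothetical, not from the paper): `model(x_t, t)` predicts
    the noise eps, `alphas_cumprod` is a 1-D tensor of cumulative alphas,
    and `gamma` is a small constant; DDPM-AIP would adapt it during training.
    """
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)

    eps = torch.randn_like(x0)           # noise the network must predict
    xi = torch.randn_like(x0)            # extra perturbation noise
    eps_perturbed = eps + gamma * xi     # perturbed noise used for the input only

    # Build the noisy input from the perturbed noise...
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps_perturbed
    # ...but still regress the clean eps, so only the input distribution shifts.
    eps_pred = model(x_t, t)
    return F.mse_loss(eps_pred, eps)
```

The key design choice is that only the network input is perturbed while the regression target stays the clean noise; this widens the input distribution seen during training toward what the iterative sampler actually produces, without changing the learning objective.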