
Unveiling Deep Shadows: A Survey and Benchmark on Image and Video Shadow Detection, Removal, and Generation in the Deep Learning Era

Published 3 Sep 2024 in cs.CV, cs.GR, and cs.MM | arXiv:2409.02108v2

Abstract: Shadows are created when light encounters obstacles, resulting in regions of reduced illumination. In computer vision, detecting, removing, and generating shadows are critical tasks for improving scene understanding, enhancing image quality, ensuring visual consistency in video editing, and optimizing virtual environments. This paper offers a comprehensive survey and evaluation benchmark on shadow detection, removal, and generation in both images and videos, focusing on the deep learning approaches of the past decade. It covers key aspects such as tasks, deep models, datasets, evaluation metrics, and comparative results under consistent experimental settings. Our main contributions include a thorough survey of shadow analysis, the standardization of experimental comparisons, an exploration of the relationships between model size, speed, and performance, a cross-dataset generalization study, the identification of open challenges and future research directions, and the provision of publicly available resources to support further research in this field.
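The abstract notes that the benchmark compares methods under consistent experimental settings using standard evaluation metrics. For shadow detection, the metric most commonly reported in this literature is the Balanced Error Rate (BER), which averages the error on shadow and non-shadow pixels so that the (usually much larger) non-shadow region does not dominate. A minimal sketch of how BER could be computed, assuming binary NumPy masks as input:

```python
import numpy as np

def balanced_error_rate(pred, gt):
    """Balanced Error Rate (BER, in %) between a binary shadow
    prediction and a ground-truth shadow mask; lower is better."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # correctly detected shadow pixels
    tn = np.logical_and(~pred, ~gt).sum()    # correctly rejected non-shadow pixels
    n_pos = max(int(gt.sum()), 1)            # shadow pixels in ground truth
    n_neg = max(int((~gt).sum()), 1)         # non-shadow pixels in ground truth
    pos_err = 1.0 - tp / n_pos               # error inside shadow regions
    neg_err = 1.0 - tn / n_neg               # error outside shadow regions
    return 100.0 * 0.5 * (pos_err + neg_err)

# A perfect prediction scores 0; a mask that misses half the shadow
# pixels but is correct elsewhere scores 25.
gt = np.array([[1, 1, 0, 0]])
pred = np.array([[1, 0, 0, 0]])
print(balanced_error_rate(pred, gt))  # → 25.0
```

Averaging the two per-class error rates is what makes the metric robust to class imbalance; an all-zero prediction on a mostly shadow-free image still incurs a 50% BER rather than a near-zero pixel error.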
