Advances in Deep Concealed Scene Understanding (2304.11234v2)

Published 21 Apr 2023 in cs.CV

Abstract: Concealed scene understanding (CSU) is a hot computer vision topic aiming to perceive objects exhibiting camouflage. The current boom in terms of techniques and applications warrants an up-to-date survey. This can help researchers to better understand the global CSU field, including both current achievements and remaining challenges. This paper makes four contributions: (1) For the first time, we present a comprehensive survey of deep learning techniques aimed at CSU, including a taxonomy, task-specific challenges, and ongoing developments. (2) To allow for an authoritative quantification of the state-of-the-art, we offer the largest and latest benchmark for concealed object segmentation (COS). (3) To evaluate the generalizability of deep CSU in practical scenarios, we collect the largest concealed defect segmentation dataset termed CDS2K with the hard cases from diversified industrial scenarios, on which we construct a comprehensive benchmark. (4) We discuss open problems and potential research directions for CSU. Our code and datasets are available at https://github.com/DengPingFan/CSU, which will be updated continuously to watch and summarize the advancements in this rapidly evolving field.


Summary

Advances in Deep Concealed Scene Understanding

This paper, "Advances in Deep Concealed Scene Understanding," surveys recent developments in concealed scene understanding (CSU), a challenging computer vision domain concerned with perceiving camouflaged objects and scenes. CSU has attracted significant attention thanks to applications across fields such as safety, healthcare, agriculture, and content creation.

Overview of Contributions

The paper makes several noteworthy contributions to the CSU community:

  1. Comprehensive Survey: The authors present the first detailed survey of deep learning techniques for CSU, covering current techniques, task-specific challenges, and ongoing developments, organized under a taxonomy that classifies the field's many approaches.
  2. Benchmark for Concealed Object Segmentation: The authors introduce the largest and most recent benchmark for concealed object segmentation (COS), a pivotal task within CSU. The benchmark lets state-of-the-art techniques be quantified under consistent, reliable settings; a sketch of a typical evaluation metric appears after this list.
  3. Concealed Defect Segmentation Dataset: The paper introduces CDS2K, the largest concealed defect segmentation dataset, built from hard cases drawn from diverse industrial scenarios. The dataset is intended to bring CSU techniques to real-world use and provides a comprehensive benchmark for evaluating the robustness and generalizability of deep models in practical contexts.
  4. Discussion of Open Problems and Future Directions: The paper concludes by discussing open problems and potential avenues for future research, including unresolved issues within CSU, opportunities for interdisciplinary work, and ways in which benchmarks can drive further progress.
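
Benchmarks of this kind score a model's predicted mask against a pixel-level ground truth. As an illustration only (the paper's exact evaluation protocol may differ), mean absolute error (MAE) is one metric commonly reported in segmentation benchmarks; the sketch below assumes predictions and ground truths arrive as grayscale NumPy arrays.

```python
import numpy as np

def mask_mae(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute error between a predicted segmentation map and a
    ground-truth mask, both normalized to [0, 1] before comparison."""
    pred = pred.astype(np.float64)
    gt = gt.astype(np.float64)
    # Normalize to [0, 1] if the arrays arrive as 8-bit grayscale.
    if pred.max() > 1.0:
        pred /= 255.0
    if gt.max() > 1.0:
        gt /= 255.0
    return float(np.abs(pred - gt).mean())
```

As a sanity check, `mask_mae` returns 0.0 for a perfect prediction and 1.0 when the prediction is the exact inverse of the mask; lower is better.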

Key Findings and Analysis

The paper identifies several areas where deep learning has substantially advanced CSU. A variety of network architectures, such as multi-stream frameworks and transformer-based models, can capture the diverse features that CSU tasks demand. Despite these advances, typical deep learning techniques remain data-hungry and resource-intensive, underscoring the need for data-efficient strategies. The benchmark analyses further reveal that the more complex CSU tasks remain challenging and call for tighter integration of methods developed across different computer vision tasks.
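
To make the multi-stream idea concrete, here is a minimal, hypothetical sketch in PyTorch, not any specific model from the survey: one stream encodes the RGB image, a second encodes an auxiliary cue such as an edge or frequency map, and the fused features drive a per-pixel segmentation head.

```python
import torch
import torch.nn as nn

class TwoStreamSegNet(nn.Module):
    """Hypothetical two-stream design: one stream sees the RGB image,
    the other an auxiliary cue (e.g., an edge map), and their features
    are fused to predict a per-pixel mask."""

    def __init__(self, aux_channels: int = 1):
        super().__init__()
        self.rgb_stream = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.aux_stream = nn.Sequential(
            nn.Conv2d(aux_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Fuse by channel concatenation, then predict a 1-channel logit map.
        self.head = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, rgb: torch.Tensor, aux: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.rgb_stream(rgb), self.aux_stream(aux)], dim=1)
        return self.head(fused)  # raw logits; apply sigmoid for a [0, 1] mask
```

Calling `TwoStreamSegNet()(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))` returns a 1x1x64x64 logit map, since every convolution here preserves spatial resolution.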

Implications and Future Prospects

Practically, the research points to a pressing need for larger, more diverse datasets and for training strategies that address data scarcity. Theoretically, its insights suggest synergies to be gained by integrating domain-specific knowledge with cutting-edge machine learning advances. The promising impact of transformer models and of advanced data augmentation strategies such as AI-generated content (AIGC) likewise marks trends for future research to explore.
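
As one illustration of what AIGC-style augmentation could look like in practice, the following is a hedged sketch using the Hugging Face diffusers inpainting pipeline to synthesize a concealed object into a masked background region. The model id, prompt, and file paths are illustrative assumptions, and this is not the specific pipeline of any method discussed in the survey.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Illustrative only: model id, prompt, and file paths are assumptions.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

background = Image.open("forest.jpg").convert("RGB").resize((512, 512))
region = Image.open("region_mask.png").convert("L").resize((512, 512))

# Inpaint a camouflaged animal into the masked region; the edited image
# together with the region mask forms a new (image, ground-truth) pair.
augmented = pipe(
    prompt="a moth camouflaged against tree bark",
    image=background,
    mask_image=region,
).images[0]
augmented.save("augmented_sample.png")
```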

In sum, this paper serves not only as a retrospective summary of progress in the field of CSU but also as a forward-looking discourse on the direction that future research should take. It aims to inspire the CSU community to continue striving for more robust, generalizable, and efficient solutions to the rich and complex challenges presented by concealed scenes in diverse applications.
