D2SL: Decouple Defogging and Semantic Learning for Foggy Domain-Adaptive Segmentation (2404.04807v1)

Published 7 Apr 2024 in cs.CV and cs.MM

Abstract: We investigate domain-adaptive semantic segmentation in foggy weather scenarios, which aims to enhance the utilization of unlabeled foggy data and improve the model's adaptability to foggy conditions. Current methods rely on clear images as references and jointly learn defogging and segmentation for foggy images. Despite some progress, these methods still have two main drawbacks: (1) the coupling of segmentation and defogging feature representations, which reduces semantic representation capability, and (2) the failure to leverage real fog priors in unlabeled foggy data, which limits model generalization. To address these issues, we propose a novel training framework, Decouple Defogging and Semantic Learning (D2SL), which aims to alleviate the adverse impact of the defogging task on the final segmentation task. In this framework, we introduce a domain-consistent transfer strategy to establish a connection between the defogging and segmentation tasks. Furthermore, we design a real fog transfer strategy that improves defogging by fully leveraging fog priors from real foggy images. Our approach enhances the semantic representations required for segmentation during defogging learning and maximizes fog-invariant representation capability by effectively utilizing real fog data. Comprehensive experiments validate the effectiveness of the proposed method.

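As a rough illustration of the decoupling idea described in the abstract, the sketch below shows one possible shape of such a training step in PyTorch: a shared encoder feeds separate defogging and segmentation heads, real foggy images lend fog statistics to the defogging input, and the defogging loss is down-weighted so it does not erode the semantic features. All class names, the alpha-blend fog transfer, and the loss weighting are illustrative assumptions, not the authors' released implementation.

```python
# Minimal PyTorch-style sketch of a decoupled defogging + segmentation training step.
# All module names (Encoder, DefogHead, SegHead) and the fog-transfer step are
# hypothetical illustrations; they are NOT the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class D2SLSketch(nn.Module):
    def __init__(self, encoder: nn.Module, defog_head: nn.Module, seg_head: nn.Module):
        super().__init__()
        self.encoder = encoder        # shared backbone producing semantic features
        self.defog_head = defog_head  # reconstructs a clear image from the features
        self.seg_head = seg_head      # predicts per-pixel class logits

    def forward(self, image):
        feats = self.encoder(image)
        return self.defog_head(feats), self.seg_head(feats)

def training_step(model, clear_img, clear_label, real_foggy_img, optimizer, lam=0.1):
    """One hypothetical step: segmentation is supervised on labeled clear data, while
    the defogging branch learns on images blended with real fog, so the two objectives
    are kept from sharing the same reconstruction target."""
    optimizer.zero_grad()

    # (1) Semantic learning on labeled clear images.
    _, seg_logits = model(clear_img)
    seg_loss = F.cross_entropy(seg_logits, clear_label)

    # (2) Defogging learning: transfer real fog statistics onto the clear image
    #     (approximated here by a simple alpha blend; the paper's transfer strategy differs).
    alpha = 0.5
    synth_foggy = alpha * clear_img + (1.0 - alpha) * real_foggy_img
    defog_out, _ = model(synth_foggy)
    defog_loss = F.l1_loss(defog_out, clear_img)

    # (3) Decoupled update: down-weight the defogging objective so it does not
    #     dominate the semantic representation learned by the shared encoder.
    loss = seg_loss + lam * defog_loss
    loss.backward()
    optimizer.step()
    return seg_loss.item(), defog_loss.item()
```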
