Style-Hallucinated Dual Consistency Learning: A Unified Framework for Visual Domain Generalization (2212.09068v2)

Published 18 Dec 2022 in cs.CV

Abstract: Domain shift is pervasive in the visual world, and modern deep neural networks commonly suffer severe performance degradation under it due to poor generalization ability, which limits real-world applications. Domain shift mainly stems from the limited environmental variation in the source data and the large distribution gap between source and unseen target data. To this end, we propose a unified framework, Style-HAllucinated Dual consistEncy learning (SHADE), to handle such domain shift across various visual tasks. Specifically, SHADE is built on two consistency constraints: Style Consistency (SC) and Retrospection Consistency (RC). SC enriches the source situations and encourages the model to learn consistent representations across style-diversified samples. RC leverages general visual knowledge to prevent the model from overfitting to source data, keeping the representation largely consistent between the source model and a general visual model. Furthermore, we present a novel style hallucination module (SHM) to generate the style-diversified samples that are essential for consistency learning. SHM selects basis styles from the source distribution, enabling the model to dynamically generate diverse and realistic samples during training. Extensive experiments demonstrate that our versatile SHADE significantly enhances generalization in various visual recognition tasks, including image classification, semantic segmentation, and object detection, with different architectures, i.e., ConvNets and Transformers.
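The style hallucination described in the abstract can be illustrated with a minimal sketch. The idea, following AdaIN-style feature statistics, is to strip a feature channel of its own mean and standard deviation and re-impose a "hallucinated" style mixed from basis styles drawn from the source distribution. The function names, the single-channel simplification, and the fixed mixing weights below are illustrative assumptions, not the paper's implementation (which operates on full feature maps and samples mixing weights stochastically, e.g. from a Dirichlet distribution):

```python
import math

def channel_stats(feat):
    """Return (mean, std) of one feature channel (a list of floats)."""
    mu = sum(feat) / len(feat)
    var = sum((x - mu) ** 2 for x in feat) / len(feat)
    return mu, math.sqrt(var + 1e-6)  # eps for numerical stability

def hallucinate_style(feat, basis_styles, weights):
    """Re-style a channel: mix basis (mean, std) pairs with the given
    weights, then normalize the channel and apply the mixed style."""
    mu, sigma = channel_stats(feat)
    new_mu = sum(w * b[0] for w, b in zip(weights, basis_styles))
    new_sigma = sum(w * b[1] for w, b in zip(weights, basis_styles))
    return [(x - mu) / sigma * new_sigma + new_mu for x in feat]

# Example: two basis styles mixed equally hallucinates a new style
# with mean 5.0 and std 1.5 for this channel.
styled = hallucinate_style([1.0, 2.0, 3.0, 4.0],
                           basis_styles=[(0.0, 1.0), (10.0, 2.0)],
                           weights=[0.5, 0.5])
```

Because the transform is affine in the features, the restyled channel's statistics match the mixed target style, which is what lets the Style Consistency constraint compare representations across style-diversified copies of the same sample.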

