
Test-Time Style Shifting: Handling Arbitrary Styles in Domain Generalization (2306.04911v2)

Published 8 Jun 2023 in cs.CV and cs.AI

Abstract: In domain generalization (DG), the target domain is unknown while the model is being trained, yet the trained model should work on an arbitrary (and possibly unseen) target domain at inference time. This is a difficult problem, and despite active study in recent years it remains a major challenge. In this paper, we take a simple yet effective approach to the issue. We propose test-time style shifting, which shifts the style of a test sample (one with a large style gap to the source domains) to the nearest source domain the model is already familiar with, before making the prediction. This strategy lets the model handle target domains with arbitrary style statistics, without any additional model updates at test time. We further propose style balancing, which provides a strong platform for maximizing the benefit of test-time style shifting by handling DG-specific imbalance issues. The proposed ideas are easy to implement and work well in conjunction with various other DG schemes. Experimental results on different datasets show the effectiveness of our methods.
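The core idea can be sketched with feature statistics in the spirit of AdaIN: treat the channel-wise mean and standard deviation of a feature map as its "style", find the nearest stored source-domain style, and re-normalize the test feature to match it. The sketch below is an illustration of that mechanism, not the paper's exact implementation; the `source_styles` prototypes (e.g. per-domain running averages collected during training) and the Euclidean nearest-style rule are assumptions for the example.

```python
import numpy as np

def style_stats(feat):
    # Channel-wise mean and std over spatial dims; feat has shape (C, H, W).
    mu = feat.mean(axis=(1, 2))
    sigma = feat.std(axis=(1, 2)) + 1e-6  # epsilon avoids division by zero
    return mu, sigma

def shift_style(feat, source_styles):
    """Shift a test feature map's style to the nearest stored source style.

    source_styles: list of (mu, sigma) pairs, one per source domain
    (hypothetical prototypes, e.g. running averages kept during training).
    """
    mu_t, sigma_t = style_stats(feat)
    # Pick the nearest source style by Euclidean distance over (mu, sigma).
    dists = [np.linalg.norm(np.concatenate([mu_t - mu_s, sigma_t - sigma_s]))
             for mu_s, sigma_s in source_styles]
    mu_s, sigma_s = source_styles[int(np.argmin(dists))]
    # AdaIN-style re-normalization: strip the test style, apply the source style.
    normalized = (feat - mu_t[:, None, None]) / sigma_t[:, None, None]
    return normalized * sigma_s[:, None, None] + mu_s[:, None, None]
```

After this shift the downstream classifier only ever sees style statistics it encountered during training, which is why no test-time parameter update is needed.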
