Aligning Non-Causal Factors for Transformer-Based Source-Free Domain Adaptation (2311.16294v1)

Published 27 Nov 2023 in cs.CV

Abstract: Conventional domain adaptation algorithms aim to achieve better generalization by aligning only the task-discriminative causal factors between a source and a target domain. However, we find that retaining the spurious correlation between causal and non-causal factors plays a vital role in bridging the domain gap and improving target adaptation. Therefore, we propose a framework that disentangles the factors and supports causal factor alignment by aligning the non-causal factors first. We also find that the strong shape bias of vision transformers, coupled with their multi-head attention, makes them a suitable architecture for realizing the proposed disentanglement. Hence, we build a Causality-enforcing Source-Free Transformer framework (C-SFTrans) that achieves disentanglement via a novel two-stage alignment approach: a) non-causal factor alignment, where non-causal factors are aligned using a style classification task that yields an overall global alignment, and b) task-discriminative causal factor alignment, where causal factors are aligned via target adaptation. We are the first to investigate the role of vision transformers (ViTs) in a privacy-preserving source-free setting. Our approach achieves state-of-the-art results on several DA benchmarks.
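
The two-stage recipe in the abstract can be made concrete with a short sketch. The PyTorch code below is a minimal, hypothetical rendering, not the authors' released implementation: `backbone`, `style_head`, `task_head`, the `apply_random_style` helper, and the information-maximization objective used in stage (b) are all illustrative assumptions; the paper's actual style augmentations and adaptation losses may differ.

```python
# Hypothetical sketch of the two-stage alignment described above.
# All names, losses, and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def apply_random_style(images, num_styles):
    # Placeholder stylization: each "style" index applies a fixed
    # channel-wise gain, standing in for real style augmentations
    # (e.g., AdaIN-based or color/texture transforms).
    labels = torch.randint(num_styles, (images.size(0),))
    gains = 0.5 + labels.float().view(-1, 1, 1, 1) / num_styles
    return images * gains, labels

def stage_a_noncausal_alignment(backbone, style_head, target_loader,
                                optimizer, num_styles=4):
    """Stage (a): non-causal factor alignment. A style classification
    pretext task routes style (non-causal) factors into an auxiliary
    head, giving a global alignment of the target domain."""
    backbone.train()
    for images, _ in target_loader:
        stylized, style_labels = apply_random_style(images, num_styles)
        feats = backbone(stylized)  # e.g., ViT [CLS] features
        loss = F.cross_entropy(style_head(feats), style_labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def stage_b_causal_alignment(backbone, task_head, target_loader, optimizer):
    """Stage (b): task-discriminative causal factor alignment via
    source-free target adaptation. Information maximization is used
    here as a stand-in for the paper's adaptation objective."""
    backbone.train()
    for images, _ in target_loader:
        probs = F.softmax(task_head(backbone(images)), dim=1)
        # Encourage confident predictions per sample...
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        # ...while keeping predictions class-balanced over the batch.
        marginal = probs.mean(dim=0)
        diversity = (marginal * marginal.clamp_min(1e-8).log()).sum()
        loss = entropy + diversity
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Under this reading, stage (a) is run before stage (b) in each adaptation round, so style cues are absorbed by the auxiliary head before the task head is adapted on the unlabeled target data.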
