Aligning Non-Causal Factors for Transformer-Based Source-Free Domain Adaptation (2311.16294v1)
Abstract: Conventional domain adaptation algorithms aim to achieve better generalization by aligning only the task-discriminative causal factors between the source and target domains. However, we find that retaining the spurious correlation between causal and non-causal factors plays a vital role in bridging the domain gap and improving target adaptation. We therefore propose a framework that disentangles and supports causal factor alignment by aligning the non-causal factors first. We also find that the strong shape bias of vision transformers (ViTs), coupled with their multi-head attention, makes them a suitable architecture for realizing the proposed disentanglement. Hence, we build a Causality-enforcing Source-Free Transformer framework (C-SFTrans) that achieves disentanglement via a novel two-stage alignment approach: (a) non-causal factor alignment, where non-causal factors are aligned using a style classification task, yielding an overall global alignment, and (b) task-discriminative causal factor alignment, where causal factors are aligned via target adaptation. We are the first to investigate the role of ViTs in the privacy-preserving source-free setting. Our approach achieves state-of-the-art results on several DA benchmarks.
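To make the two-stage idea above concrete, here is a minimal, hypothetical PyTorch sketch. It is not the authors' implementation: the backbone is a placeholder patch-embedding module standing in for a pretrained ViT encoder, the style labels are assumed to come from a small set of style augmentations applied to each image, and entropy minimization is used only as a generic stand-in for the paper's target-adaptation objective. All class and function names (`TwoHeadViT`, `stage1_noncausal_step`, `stage2_causal_step`) are illustrative.

```python
# Hypothetical sketch of the two-stage alignment described in the abstract.
# Stage 1: align non-causal (style) factors with an auxiliary style-classification task.
# Stage 2: adapt the task head on unlabeled target data (entropy minimization here is a
#          generic source-free objective, not necessarily the paper's exact loss).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoHeadViT(nn.Module):
    """Feature extractor with a task (causal) head and an auxiliary style (non-causal) head."""

    def __init__(self, feat_dim=768, num_classes=65, num_styles=4):
        super().__init__()
        # Placeholder for a pretrained ViT encoder: ViT-style 16x16 patch embedding
        # followed by mean pooling over patch tokens.
        self.patch_embed = nn.Conv2d(3, feat_dim, kernel_size=16, stride=16)
        self.task_head = nn.Linear(feat_dim, num_classes)   # task-discriminative factors
        self.style_head = nn.Linear(feat_dim, num_styles)   # non-causal style factors

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2)   # (B, D, num_patches)
        feats = tokens.mean(dim=2)                # (B, D) pooled representation
        return self.task_head(feats), self.style_head(feats)


def stage1_noncausal_step(model, opt, x_styled, style_labels):
    """Non-causal alignment: predict which style augmentation was applied."""
    _, style_logits = model(x_styled)
    loss = F.cross_entropy(style_logits, style_labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()


def stage2_causal_step(model, opt, x_target):
    """Causal alignment on unlabeled target data via entropy minimization."""
    task_logits, _ = model(x_target)
    p = task_logits.softmax(dim=1)
    loss = -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()


if __name__ == "__main__":
    model = TwoHeadViT()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    x = torch.randn(8, 3, 224, 224)        # dummy image batch
    styles = torch.randint(0, 4, (8,))     # dummy style-augmentation labels
    print(stage1_noncausal_step(model, opt, x, styles))
    print(stage2_causal_step(model, opt, x))
```

One plausible way to obtain the style labels in stage 1 is to apply a small bank of style randomizations (e.g., AdaIN-style transfer or texture augmentations) to each image and have the style head identify which one was used; this forces the representation to encode and align the non-causal style factors before the task head is adapted on target data in stage 2.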