Unified Source-Free Domain Adaptation (2403.07601v1)

Published 12 Mar 2024 in cs.CV

Abstract: In the pursuit of transferring a source model to a target domain without access to the source training data, Source-Free Domain Adaptation (SFDA) has been extensively explored across various scenarios, including closed-set, open-set, partial-set, and generalized settings. Existing methods, each focusing on a specific scenario, address only a subset of the challenges and additionally require prior knowledge of the target domain, significantly limiting their practical utility and deployability. In light of these considerations, we introduce a more practical yet challenging problem, termed unified SFDA, which comprehensively incorporates all specific scenarios in a unified manner. To tackle this unified SFDA problem, we propose a novel approach called Latent Causal Factors Discovery (LCFD). In contrast to previous alternatives that emphasize learning the statistical description of reality, we formulate LCFD from a causality perspective. The objective is to uncover the causal relationships between latent variables and model decisions, enhancing the reliability and robustness of the learned model against domain shifts. To integrate extensive world knowledge, we leverage a pre-trained vision-language model such as CLIP. This aids in the formation and discovery of latent causal factors without supervision, under variation in both distribution and semantics, coupled with a newly designed information bottleneck with theoretical guarantees. Extensive experiments demonstrate that LCFD can achieve new state-of-the-art results in distinct SFDA settings, as well as source-free out-of-distribution generalization. Our code and data are available at https://github.com/tntek/source-free-domain-adaptation.
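To make the CLIP-based ingredient concrete, below is a minimal sketch of how a pre-trained vision-language model can supply zero-shot pseudo-supervision on unlabeled target-domain images. This is a generic illustration of the kind of CLIP-derived signal the paper leverages, not the LCFD method itself; the class names, prompt template, and image path are hypothetical placeholders.

```python
# Zero-shot pseudo-labeling of a target-domain image with CLIP.
# Generic sketch only; not the paper's LCFD algorithm.
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical target class names and a simple prompt template.
class_names = ["bike", "calculator", "headphones"]
prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)

image = preprocess(Image.open("target_image.jpg")).unsqueeze(0).to(device)  # placeholder path

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(prompts)
    # Cosine similarity between the image embedding and each class prompt,
    # scaled and normalized into a probability distribution over classes.
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)

pseudo_label = probs.argmax(dim=-1).item()
print(class_names[pseudo_label], probs.max().item())
```

In SFDA-style pipelines, such pseudo-labels (and their confidences) are typically used as a weak supervisory signal for adapting the source model on the unlabeled target data, since the source training set itself is unavailable.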
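For context on the "newly designed information bottleneck": the classical formulation of Tishby et al. (reference point only; the paper's variant differs in details not given here) learns a stochastic encoder producing a latent $Z$ from input $X$ that remains predictive of the decision variable $Y$ while compressing away everything else:

```latex
\max_{p(z \mid x)} \; I(Z;Y) \;-\; \beta\, I(X;Z)
```

where $I(\cdot\,;\cdot)$ denotes mutual information and $\beta > 0$ trades off compression of $X$ against prediction of $Y$.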
