
A Unified Framework for Unsupervised Domain Adaptation based on Instance Weighting (2312.05024v1)

Published 8 Dec 2023 in cs.CV

Abstract: Despite the progress made in domain adaptation, solving Unsupervised Domain Adaptation (UDA) problems with a general method under the complex conditions caused by label shifts between domains remains a formidable task. In this work, we comprehensively investigate four distinct UDA settings, including closed set domain adaptation, partial domain adaptation, open set domain adaptation, and universal domain adaptation, where common classes shared between the source and target domains coexist alongside domain-specific private classes. The prominent challenges inherent in these diverse UDA settings center on the discrimination of common/private classes and the precise measurement of domain discrepancy. To surmount these challenges effectively, we propose a novel and effective method called Learning Instance Weighting for Unsupervised Domain Adaptation (LIWUDA), which caters to various UDA settings. Specifically, the proposed LIWUDA method constructs a weight network to assign a weight to each instance based on its probability of belonging to the common classes, and designs Weighted Optimal Transport (WOT) for domain alignment by leveraging the instance weights. Additionally, the proposed LIWUDA method devises a Separate and Align (SA) loss to separate instances with low similarities and align instances with high similarities. To guide the learning of the weight network, Intra-domain Optimal Transport (IOT) is proposed to enforce that the weights of instances in common classes follow a uniform distribution. Through the integration of these three components, the proposed LIWUDA method demonstrates its capability to address all four UDA settings in a unified manner. Experimental evaluations conducted on three benchmark datasets substantiate the effectiveness of the proposed LIWUDA method.
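The paper itself does not include code; the following is a minimal NumPy sketch of the Weighted Optimal Transport (WOT) idea described in the abstract, under stated assumptions. Instance weights (here, hypothetical stand-ins for the weight network's per-instance probabilities of belonging to common classes) reweight the source and target marginals before entropic-regularized OT is solved with Sinkhorn iterations. The function names `sinkhorn` and `weighted_ot_distance` are illustrative, not the authors' implementation.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iters=200):
    """Entropic-regularized OT plan between marginals a and b for cost C."""
    K = np.exp(-C / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):             # alternating marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan P

def weighted_ot_distance(Xs, Xt, ws, wt, reg=0.1):
    """Weighted OT discrepancy: instance weights reweight the marginals,
    so likely-private instances contribute little mass to the alignment."""
    a = ws / ws.sum()                    # weighted source marginal
    b = wt / wt.sum()                    # weighted target marginal
    C = np.linalg.norm(Xs[:, None, :] - Xt[None, :, :], axis=2) ** 2
    C = C / (C.max() + 1e-12)            # normalize cost for stability
    P = sinkhorn(a, b, C, reg)
    return float((P * C).sum())
```

As a sanity check on the intuition: downweighting a source outlier (a proxy for a private-class instance) should shrink the measured discrepancy, because its mass no longer has to be transported to the target.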

Authors (6)
  1. Jinjing Zhu
  2. Feiyang Ye
  3. Qiao Xiao
  4. Pengxin Guo
  5. Yu Zhang
  6. Qiang Yang
Citations (2)
