Gradually Vanishing Gap in Prototypical Network for Unsupervised Domain Adaptation (2405.17774v1)

Published 28 May 2024 in cs.CV

Abstract: Unsupervised domain adaptation (UDA) is a critical problem in transfer learning, which aims to transfer semantic information from a labeled source domain to an unlabeled target domain. Recent UDA models have demonstrated significant generalization capability on the target domain. However, the generalization boundary of UDA models remains unclear: when the domain discrepancy is too large, the model cannot preserve the distribution structure, leading to distribution collapse during alignment. To address this challenge, we propose an efficient UDA framework named Gradually Vanishing Gap in Prototypical Network (GVG-PN), which achieves transfer learning from both global and local perspectives. From the global alignment standpoint, our model generates a domain-biased intermediate domain that helps preserve the distribution structures. By entangling cross-domain features, our model progressively reduces the risk of distribution collapse. However, relying on global alignment alone is insufficient to preserve the distribution structure. To further enhance the inner relationships among features, we introduce the local perspective. We utilize a graph convolutional network (GCN) as an intuitive way to explore the internal relationships between features, ensuring the preservation of manifold structures and generating domain-biased prototypes. Additionally, we consider the discriminability of these inner relationships. We propose a pro-contrastive loss to enhance discriminability at the prototype level by separating hard negative pairs. By incorporating both the GCN and the pro-contrastive loss, our model fully explores fine-grained semantic relationships. Experiments on several UDA benchmarks validate that the proposed GVG-PN clearly outperforms SOTA models.
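To make the abstract's local-alignment ideas concrete, the following is a minimal, illustrative PyTorch sketch (not the authors' implementation) of two components it describes: a single graph-convolution step that refines batch features through a cosine-similarity graph before class prototypes are averaged, and an InfoNCE-style contrastive loss applied at the prototype level, in which prototypes of other classes act as negatives and the most similar (hardest) ones dominate the softmax. All function names, the temperature value, and the use of pseudo-labels for the target batch are assumptions made for illustration; the paper's actual GCN design and pro-contrastive loss may differ.

import torch
import torch.nn.functional as F


def gcn_refine(features: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """One graph-convolution step: build a cosine-similarity adjacency over the
    batch, symmetrically normalize it, and propagate features through it."""
    sim = F.cosine_similarity(features.unsqueeze(1), features.unsqueeze(0), dim=-1)
    adj = sim.clamp(min=0) + torch.eye(features.size(0))      # non-negative edges + self-loops
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    adj_norm = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)
    return F.relu(adj_norm @ features @ weight)               # D^{-1/2} A D^{-1/2} X W


def class_prototypes(features: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Mean feature per (pseudo-)labelled class; absent classes stay zero here."""
    protos = torch.zeros(num_classes, features.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos


def prototype_contrastive_loss(src_protos: torch.Tensor, tgt_protos: torch.Tensor,
                               temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss at the prototype level: the target prototype of the same
    class is the positive, prototypes of other classes are negatives, and the most
    similar (hardest) negatives contribute most to the softmax denominator."""
    src = F.normalize(src_protos, dim=1)
    tgt = F.normalize(tgt_protos, dim=1)
    logits = src @ tgt.t() / temperature                      # (C, C) similarity matrix
    targets = torch.arange(src.size(0))                       # class c should match class c
    return F.cross_entropy(logits, targets)


# Toy usage with random features; on a real target domain, y_t would be pseudo-labels.
feats_s, y_s = torch.randn(32, 256), torch.randint(0, 10, (32,))
feats_t, y_t = torch.randn(32, 256), torch.randint(0, 10, (32,))
W = 0.01 * torch.randn(256, 256)
p_s = class_prototypes(gcn_refine(feats_s, W), y_s, num_classes=10)
p_t = class_prototypes(gcn_refine(feats_t, W), y_t, num_classes=10)
loss = prototype_contrastive_loss(p_s, p_t)

In a full pipeline the GCN weight would be a learned parameter trained jointly with the backbone, classes missing from a batch would need masking, and the global branch (the domain-biased intermediate domain described in the abstract) would run alongside this local branch.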

Authors (7)
  1. Shanshan Wang (166 papers)
  2. Hao Zhou (351 papers)
  3. Xun Yang (76 papers)
  4. Zhenwei He (7 papers)
  5. Mengzhu Wang (21 papers)
  6. Xingyi Zhang (33 papers)
  7. Meng Wang (1063 papers)