
Source Free Unsupervised Graph Domain Adaptation (2112.00955v4)

Published 2 Dec 2021 in cs.LG and cs.AI

Abstract: Graph Neural Networks (GNNs) have achieved great success on a variety of tasks with graph-structured data, among which node classification is an essential one. Unsupervised Graph Domain Adaptation (UGDA) offers practical value by reducing the labeling cost for node classification. It leverages knowledge from a labeled graph (i.e., the source domain) to tackle the same task on another, unlabeled graph (i.e., the target domain). Most existing UGDA methods rely heavily on the labeled source graph: they use source labels as the supervision signal and are trained jointly on both the source and target graphs. However, in some real-world scenarios the source graph is inaccessible because of privacy concerns. We therefore propose a novel scenario named Source Free Unsupervised Graph Domain Adaptation (SFUGDA), in which the only information available from the source domain is the well-trained source model, with no exposure to the source graph or its labels. As a result, existing UGDA methods are no longer feasible. To address the non-trivial adaptation challenges of this practical scenario, we propose a model-agnostic algorithm called SOGA, which fully exploits the discriminative ability of the source model while preserving the consistency of structural proximity on the target graph. We demonstrate the effectiveness of the proposed algorithm both theoretically and empirically. Experimental results on four cross-domain tasks show consistent improvements in Macro-F1 and Macro-AUC.
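The two ingredients the abstract names — exploiting the source model's discriminative ability and preserving structural proximity on the target graph — can be illustrated with a minimal NumPy sketch. The specific loss forms below (an information-maximization term on the frozen source model's target predictions, plus a consistency term over target edges) follow common source-free adaptation practice and are an assumption for illustration, not SOGA's exact formulation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def information_maximization(probs, eps=1e-12):
    """Exploit source-model discriminability without labels:
    low per-node entropy (confident predictions) while keeping the
    average prediction high-entropy (avoid collapsing to one class)."""
    cond_ent = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    marginal = probs.mean(axis=0)
    marg_ent = -(marginal * np.log(marginal + eps)).sum()
    return cond_ent - marg_ent  # minimize

def structure_consistency(probs, edges, eps=1e-12):
    """Preserve structural proximity: linked target nodes should
    receive similar predictions (cross-entropy between endpoints)."""
    u, v = edges[:, 0], edges[:, 1]
    ce = -(probs[u] * np.log(probs[v] + eps)).sum(axis=1)
    return ce.mean()

# Toy logits from a hypothetical frozen source model on 4 target nodes,
# and two target-graph edges; both names and values are illustrative.
logits = np.array([[2.0, 0.1], [1.5, 0.2], [0.1, 2.2], [0.0, 1.8]])
probs = softmax(logits)
edges = np.array([[0, 1], [2, 3]])
loss = information_maximization(probs) + structure_consistency(probs, edges)
```

In a full adaptation loop, `loss` would be minimized over the (fine-tuned) model's parameters on the target graph alone, which is what makes the setting source-free.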

Authors (8)
  1. Haitao Mao (29 papers)
  2. Lun Du (50 papers)
  3. Yujia Zheng (34 papers)
  4. Qiang Fu (159 papers)
  5. Zelin Li (11 papers)
  6. Xu Chen (413 papers)
  7. Shi Han (74 papers)
  8. Dongmei Zhang (193 papers)
Citations (5)
