
Graph Harmony: Denoising and Nuclear-Norm Wasserstein Adaptation for Enhanced Domain Transfer in Graph-Structured Data (2301.12361v2)

Published 29 Jan 2023 in cs.LG

Abstract: Graph-structured data arise in numerous domains, yet the scarcity of labeled instances hinders the effective application of deep learning in many scenarios. Traditional unsupervised domain adaptation (UDA) strategies for graphs hinge primarily on adversarial learning and pseudo-labeling. These approaches fail to effectively leverage graph discriminative features, leading to class mismatching and unreliable label quality. To navigate these obstacles, we develop the Denoising and Nuclear-Norm Wasserstein Adaptation Network (DNAN). DNAN employs the Nuclear-norm Wasserstein discrepancy (NWD), which can simultaneously achieve domain alignment and class distinguishment. DNAN also integrates a denoising mechanism via a variational graph autoencoder that mitigates data noise. This denoising mechanism helps capture essential features of both the source and target domains, improving the robustness of the domain adaptation process. Our comprehensive experiments demonstrate that DNAN outperforms state-of-the-art methods on standard UDA benchmarks for graph classification.
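The two mechanisms named in the abstract can be made concrete. A nuclear-norm Wasserstein critic reuses the task classifier itself and compares the nuclear norms (sums of singular values) of the softmax prediction matrices produced on source and target batches: a larger nuclear norm corresponds to predictions that are both confident and diverse across classes, so driving the two norms together aligns the domains without collapsing class structure. The PyTorch sketch below illustrates only that general idea; the function name and the per-batch normalization are assumptions, not the authors' exact objective.

```python
import torch

def nuclear_wasserstein_discrepancy(src_logits: torch.Tensor,
                                    tgt_logits: torch.Tensor) -> torch.Tensor:
    """Hedged sketch of a nuclear-norm Wasserstein critic (illustrative only).

    Compares the nuclear norms of the classifier's batch prediction
    matrices (batch x classes) on the source and target domains.
    """
    p_src = src_logits.softmax(dim=1)  # (B_s, C) source predictions
    p_tgt = tgt_logits.softmax(dim=1)  # (B_t, C) target predictions
    # Normalize by batch size so the two terms stay comparable when the
    # source and target batches differ in size (an assumption made here).
    return (torch.linalg.matrix_norm(p_src, ord="nuc") / p_src.size(0)
            - torch.linalg.matrix_norm(p_tgt, ord="nuc") / p_tgt.size(0))
```

The denoising component builds on the variational graph autoencoder of Kipf and Welling (2016), which encodes each node into a latent Gaussian and reconstructs the adjacency structure. A generic VGAE in PyTorch Geometric looks roughly like the following sketch; the layer widths are placeholders, and this is the standard VGAE rather than DNAN's specific denoising variant.

```python
import torch
from torch_geometric.nn import GCNConv, VGAE

class GCNEncoder(torch.nn.Module):
    """Two-branch GCN encoder producing the mean and log-std of q(z | X, A)."""

    def __init__(self, in_dim: int, hid_dim: int, lat_dim: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv_mu = GCNConv(hid_dim, lat_dim)
        self.conv_logstd = GCNConv(hid_dim, lat_dim)

    def forward(self, x, edge_index):
        h = self.conv1(x, edge_index).relu()
        return self.conv_mu(h, edge_index), self.conv_logstd(h, edge_index)

# Usage (all dimensions are placeholders):
# model = VGAE(GCNEncoder(in_dim=16, hid_dim=32, lat_dim=8))
# z = model.encode(x, edge_index)
# loss = model.recon_loss(z, edge_index) + model.kl_loss() / num_nodes
```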
