
Spectral Augmentation for Self-Supervised Learning on Graphs (2210.00643v2)

Published 2 Oct 2022 in cs.LG and cs.AI

Abstract: Graph contrastive learning (GCL), an emerging self-supervised learning technique on graphs, aims to learn representations via instance discrimination. Its performance relies heavily on graph augmentations that reflect invariant patterns robust to small perturbations; yet it remains unclear what graph invariance GCL should capture. Recent studies mainly perform topology augmentations uniformly at random in the spatial domain, ignoring their influence on the intrinsic structural properties embedded in the spectral domain. In this work, we seek a principled approach to topology augmentation by exploring graph invariance from the spectral perspective. We develop spectral augmentation, which guides topology augmentations by maximizing the spectral change. Extensive experiments on both graph and node classification tasks demonstrate the effectiveness of our method for self-supervised representation learning. The proposed method also brings promising generalization capability in transfer learning and exhibits intriguing robustness under adversarial attacks. Our study sheds light on a general principle for graph topology augmentation.
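To make the core idea concrete, the sketch below illustrates what "guiding topology augmentation by maximizing spectral change" could look like in its simplest form: greedily flipping the edge whose addition or removal most perturbs the spectrum of the normalized graph Laplacian. This is an illustrative assumption, not the paper's actual algorithm (which the abstract does not specify in detail); the function names and the greedy enumeration strategy are hypothetical, and a practical implementation would need a scalable surrogate rather than a full eigendecomposition per candidate edge.

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues of the symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return np.linalg.eigvalsh(lap)

def spectral_change(adj, edge):
    """L2 distance between the Laplacian spectra before and after flipping one edge."""
    i, j = edge
    flipped = adj.copy()
    flipped[i, j] = flipped[j, i] = 1 - flipped[i, j]
    return np.linalg.norm(laplacian_spectrum(adj) - laplacian_spectrum(flipped))

def greedy_spectral_augmentation(adj, n_flips=2):
    """Greedily flip the n_flips edges that each maximize the spectral change.

    Brute-force over all node pairs: O(n^2) eigendecompositions per flip,
    so this is only viable for small graphs and serves as a conceptual sketch.
    """
    aug = adj.copy()
    n = len(adj)
    for _ in range(n_flips):
        candidates = [(i, j) for i in range(n) for j in range(i + 1, n)]
        best = max(candidates, key=lambda e: spectral_change(aug, e))
        i, j = best
        aug[i, j] = aug[j, i] = 1 - aug[i, j]
    return aug
```

Under this reading, the augmented view preserves the graph's node set while maximally perturbing its spectral profile, so a contrastive encoder trained to align the two views is pushed to encode information that survives large spectral change.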
