Exploring Task Unification in Graph Representation Learning via Generative Approach (2403.14340v1)
Abstract: Graphs are ubiquitous in real-world scenarios and encompass a diverse range of tasks, from node-, edge-, and graph-level tasks to transfer learning. However, designing a specific model for each type of graph task is often costly and lacks generalizability. Recent endeavors under the "Pre-training + Fine-tuning" or "Pre-training + Prompt" paradigms aim to design a unified framework capable of generalizing across multiple graph tasks. Among these, graph autoencoders (GAEs), generative self-supervised models, have demonstrated their potential to address various graph tasks effectively. Nevertheless, these methods typically employ multi-stage training and require adaptive designs, which on the one hand make them difficult to apply seamlessly to diverse graph tasks, and on the other hand overlook the negative impact caused by discrepancies in task objectives between the different stages. To address these challenges, we propose GA2E, a unified adversarially masked autoencoder capable of addressing the above challenges seamlessly. Specifically, GA2E uses the subgraph as the meta-structure, which remains consistent across all graph tasks (ranging from node-, edge-, and graph-level tasks to transfer learning) and all stages (both training and inference). Further, GA2E operates in a "Generate then Discriminate" manner: it leverages a masked GAE to reconstruct the input subgraph while treating the GAE as a generator that compels the reconstructed subgraph to resemble the input. Furthermore, GA2E introduces an auxiliary discriminator to discern the authenticity of the reconstructed (generated) subgraph against the input subgraph, ensuring the robustness of the graph representation through an adversarial training mechanism. We validate GA2E's capabilities through extensive experiments on 21 datasets across four types of graph tasks.
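The "Generate then Discriminate" loop described above can be sketched roughly as follows. This is a hypothetical, heavily simplified illustration, not the paper's implementation: the paper uses GNN encoders/decoders over subgraph structure, whereas here plain NumPy linear maps stand in for the generator (masked autoencoder) and the auxiliary discriminator, and all weights and the masking ratio are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_features(X, ratio=0.5):
    """Masking step: randomly zero out a fraction of node feature rows."""
    m = rng.random(X.shape[0]) < ratio
    Xm = X.copy()
    Xm[m] = 0.0
    return Xm, m

def encode(X, W_enc):
    # Generator's encoder (a GNN in the paper; a linear map + tanh here).
    return np.tanh(X @ W_enc)

def decode(Z, W_dec):
    # Generator's decoder: reconstruct node features of the subgraph.
    return Z @ W_dec

def discriminate(X, w):
    """Auxiliary discriminator: a single 'realness' score for a subgraph."""
    return 1.0 / (1.0 + np.exp(-(X @ w).mean()))

# Toy input subgraph: 8 nodes with 4-dimensional features.
X = rng.standard_normal((8, 4))
W_enc = rng.standard_normal((4, 3)) * 0.1   # placeholder generator weights
W_dec = rng.standard_normal((3, 4)) * 0.1
w_disc = rng.standard_normal(4) * 0.1       # placeholder discriminator weights

# Generate: mask the input subgraph, then reconstruct it.
Xm, mask = mask_features(X)
X_rec = decode(encode(Xm, W_enc), W_dec)

# Generator objectives: reconstruction error plus an adversarial term
# that pushes the reconstruction to look "real" to the discriminator.
recon_loss = float(((X_rec - X) ** 2).mean())
score = discriminate(X_rec, w_disc)
adv_loss = -float(np.log(score + 1e-9))

print(recon_loss, score, adv_loss)
```

In a full training loop the generator would minimize `recon_loss + adv_loss` while the discriminator is trained in alternation to separate input subgraphs from reconstructed ones, which is the standard adversarial mechanism the abstract refers to.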
- Yulan Hu
- Sheng Ouyang
- Zhirui Yang
- Ge Chen
- Junchen Wan
- Xiao Wang
- Yong Liu