GraphControl: Adding Conditional Control to Universal Graph Pre-trained Models for Graph Domain Transfer Learning (2310.07365v3)
Abstract: Graph-structured data, which models complex relationships between objects, is ubiquitous on the Web and underpins a variety of Web applications. The daily influx of unlabeled graph data on the Web offers immense potential for these applications. Graph self-supervised algorithms have achieved significant success in acquiring generic knowledge from abundant unlabeled graph data, and the resulting pre-trained models can be applied to various downstream Web applications, saving training time and improving downstream (target) performance. However, different graphs, even across seemingly similar domains, can differ significantly in attribute semantics, making it difficult, if not infeasible, to transfer pre-trained models to downstream tasks. For example, task-specific node attributes in a downstream task (specificity) are usually deliberately omitted so that the pre-trained representation (transferability) can be leveraged. We term this trade-off the "transferability-specificity dilemma". To address this challenge, we introduce a deployment module named GraphControl, motivated by ControlNet, to realize better graph domain transfer learning. Specifically, by leveraging universal structural pre-trained models together with GraphControl, we align the input space across graphs and incorporate the unique characteristics of the target data as conditional inputs. These conditions are progressively integrated into the model during fine-tuning or prompt tuning through a ControlNet-style mechanism, enabling personalized deployment. Extensive experiments show that our method significantly enhances the adaptability of pre-trained models on target attributed datasets, achieving 1.4x-3x performance gains. It also outperforms training-from-scratch methods on the target data by a comparable margin and converges faster.
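To make the conditioning mechanism concrete, below is a minimal PyTorch sketch of the ControlNet-style idea the abstract describes: a frozen structural encoder provides the transferable part, a trainable copy receives a condition derived from target-data attributes, and zero-initialized projections connect the two. All names (`GraphControl`, `zero_linear`) and the `forward(x, edge_index)` encoder signature are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of ControlNet-style conditioning for a graph encoder.
import copy
import torch
import torch.nn as nn


def zero_linear(dim_in: int, dim_out: int) -> nn.Linear:
    """Zero-initialized projection: at step 0 the control branch contributes
    nothing, so training starts exactly from the frozen model's behavior."""
    layer = nn.Linear(dim_in, dim_out)
    nn.init.zeros_(layer.weight)
    nn.init.zeros_(layer.bias)
    return layer


class GraphControl(nn.Module):
    """Frozen structural encoder (transferability) plus a trainable copy
    conditioned on target-data attributes (specificity)."""

    def __init__(self, pretrained: nn.Module, struct_dim: int,
                 cond_dim: int, hidden_dim: int):
        super().__init__()
        # Frozen universal structural encoder, pre-trained on structure only.
        self.frozen = pretrained
        for p in self.frozen.parameters():
            p.requires_grad = False
        # Trainable copy that absorbs the attribute-derived condition.
        self.control = copy.deepcopy(pretrained)
        for p in self.control.parameters():
            p.requires_grad = True
        self.cond_in = zero_linear(cond_dim, struct_dim)    # inject condition
        self.cond_out = zero_linear(hidden_dim, hidden_dim)  # merge branches

    def forward(self, x_struct: torch.Tensor, cond: torch.Tensor,
                edge_index: torch.Tensor) -> torch.Tensor:
        h = self.frozen(x_struct, edge_index)                # transferable part
        h_ctrl = self.control(x_struct + self.cond_in(cond), edge_index)
        return h + self.cond_out(h_ctrl)                     # progressive control
```

The zero-initialized projections are the key design choice borrowed from ControlNet: because the control branch is an exact no-op at initialization, fine-tuning on the target graph cannot immediately corrupt the pre-trained representation, and the condition is integrated only as gradients gradually open the two projections.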
Authors: Yun Zhu, Yaoke Wang, Haizhou Shi, Zhenshuo Zhang, Dian Jiao, Siliang Tang