Subgraph-level Universal Prompt Tuning (2402.10380v1)
Abstract: In the evolving landscape of machine learning, the adaptation of pre-trained models through prompt tuning has become increasingly prominent. This trend is particularly visible in the graph domain, where diverse pre-training strategies pose unique challenges for developing effective prompt-based tuning methods for graph neural networks. Previous approaches have been limited, focusing on specialized prompting functions tailored to models pre-trained with edge prediction tasks; as a result, they do not generalize across different pre-training strategies. Recently, a simple prompt tuning method was designed to work with any pre-training strategy by operating in the input graph's feature space, which allows it to theoretically emulate any type of prompting function and thereby greatly increases its versatility for a range of downstream applications. Nevertheless, whether such simple prompts can fully capture the complex contexts found in graphs remains an open question that warrants further investigation. Addressing this challenge, our work introduces the Subgraph-level Universal Prompt Tuning (SUPT) approach, which focuses on the detailed context within subgraphs. In SUPT, prompt features are assigned at the subgraph level, preserving the method's universal capability. SUPT requires far fewer tuning parameters than fine-tuning-based methods, yet outperforms them in 42 of 45 full-shot experiments with an average improvement of over 2.5%. In few-shot scenarios, it excels in 41 of 45 experiments, achieving an average performance increase of more than 6.6%.
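To make the idea concrete, the sketch below illustrates the general pattern of subgraph-level prompt tuning in the input feature space: a small set of learnable prompt vectors is assigned to nodes by (sub)graph membership and added to the node features before a frozen pre-trained GNN. This is a minimal illustration, not the authors' implementation; the names `SubgraphPrompt`, `num_prompt_bases`, and the soft-assignment scheme are hypothetical, and the frozen encoder is assumed to expose a `forward(x, edge_index)` interface. SUPT's actual assignment of prompt features to subgraphs (e.g., pooling-style node clustering) differs; the sketch only shows the parameter-efficiency pattern.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SubgraphPrompt(nn.Module):
    """k learnable prompt basis vectors, softly assigned to nodes (illustrative only)."""

    def __init__(self, feat_dim: int, num_prompt_bases: int = 4):
        super().__init__()
        # Small random init so both the bases and the assignment layer receive gradients.
        self.prompt_bases = nn.Parameter(0.01 * torch.randn(num_prompt_bases, feat_dim))
        self.assign = nn.Linear(feat_dim, num_prompt_bases)  # node -> subgraph membership scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.assign(x), dim=-1)   # [N, k] soft subgraph assignment
        return x + weights @ self.prompt_bases            # prompted node features, same shape as x


def tune_step(frozen_gnn, prompt, head, x, edge_index, y, optimizer):
    """One prompt-tuning step on a single graph: only `prompt` and `head` are updated."""
    optimizer.zero_grad()
    h = frozen_gnn(prompt(x), edge_index)           # pre-trained encoder, parameters frozen
    logits = head(h.mean(dim=0, keepdim=True))      # simple mean readout for a graph-level task
    loss = F.binary_cross_entropy_with_logits(logits, y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the encoder once before training, e.g. `for p in frozen_gnn.parameters(): p.requires_grad_(False)`, ensures that only the prompt bases, the assignment layer, and the task head are optimized, which is what keeps the number of tuned parameters small relative to full fine-tuning.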
Authors: Junhyun Lee, Wooseong Yang, Jaewoo Kang