
Linkless Link Prediction via Relational Distillation (2210.05801v3)

Published 11 Oct 2022 in cs.LG

Abstract: Graph Neural Networks (GNNs) have shown exceptional performance in the task of link prediction. Despite their effectiveness, the high latency brought by non-trivial neighborhood data dependency limits GNNs in practical deployments. Conversely, the known efficient MLPs are much less effective than GNNs due to the lack of relational knowledge. In this work, to combine the advantages of GNNs and MLPs, we start with exploring direct knowledge distillation (KD) methods for link prediction, i.e., predicted logit-based matching and node representation-based matching. Upon observing direct KD analogs do not perform well for link prediction, we propose a relational KD framework, Linkless Link Prediction (LLP), to distill knowledge for link prediction with MLPs. Unlike simple KD methods that match independent link logits or node representations, LLP distills relational knowledge that is centered around each (anchor) node to the student MLP. Specifically, we propose rank-based matching and distribution-based matching strategies that complement each other. Extensive experiments demonstrate that LLP boosts the link prediction performance of MLPs with significant margins, and even outperforms the teacher GNNs on 7 out of 8 benchmarks. LLP also achieves a 70.68x speedup in link prediction inference compared to GNNs on the large-scale OGB dataset.

Citations (31)

Summary

  • The paper presents the LLP framework that combines MLP speed with relational knowledge distillation for scalable link prediction.
  • It introduces novel rank-based and distribution-based matching strategies to distill complex neighborhood relationships.
  • LLP achieves up to 70.68× inference speedup while matching or surpassing GNN performance on multiple benchmarks.

Enhancing Link Prediction with Relational Knowledge Distillation in Graph Neural Networks

Introduction

Link prediction, a pivotal task in graph learning, seeks to estimate the likelihood of an association between two nodes in a graph. Graph Neural Networks (GNNs) have traditionally been employed for this problem, leveraging neighborhood information to predict potential links. However, their dependency on local graph topology at inference time inherently limits GNNs' efficiency, particularly in large-scale applications where fast inference is paramount. Multi-Layer Perceptrons (MLPs), while efficient, fall short in performance because they cannot natively incorporate relational graph knowledge. This paper introduces a novel relational Knowledge Distillation (KD) framework, named Linkless Link Prediction (LLP), aimed at merging the efficiency of MLPs with the relational understanding of GNNs for link prediction tasks.

Methodology

The paper commences by examining conventional direct KD methods, including logit-based and representation-based matching for link prediction. However, these approaches showed limited effectiveness in capturing the relational complexities essential for accurate link predictions. To bridge this gap, the LLP framework is proposed, focusing not on individual nodes or pairs, but on the relationships within an anchor node's neighborhood. Specifically, LLP introduces two novel strategies, rank-based and distribution-based matching, to distill graph relational knowledge into MLPs. These strategies are designed to complement each other: while rank-based matching focuses on preserving the relational order among context nodes relative to an anchor node, distribution-based matching seeks to emulate the relational distribution, enabling a comprehensive transfer of graph structure knowledge to MLPs.
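The paper does not include its implementation in this summary, so the following is a minimal PyTorch-style sketch of how the two matching objectives could look for a single anchor node and its sampled context nodes. The function name `llp_distill_losses`, the shared `margin` used both as a tie threshold and hinge margin, the `temperature`, and the exact loss forms are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def llp_distill_losses(teacher_scores, student_scores, margin=0.1, temperature=1.0):
    """Sketch of LLP-style relational matching losses for one anchor node.

    teacher_scores, student_scores: 1-D tensors of link logits between the anchor
    and the same set of sampled context nodes (shapes must match).
    """
    # Rank-based matching: for each pair of context nodes (i, j), encourage the
    # student to preserve the teacher's relative ordering of scores around the anchor.
    t_diff = teacher_scores.unsqueeze(1) - teacher_scores.unsqueeze(0)  # pairwise teacher gaps
    s_diff = student_scores.unsqueeze(1) - student_scores.unsqueeze(0)  # pairwise student gaps
    # Target is +1 / -1 depending on which node the teacher ranks higher; pairs the
    # teacher treats as (near-)ties carry no ranking signal.
    target = torch.sign(t_diff) * (t_diff.abs() > margin).float()
    mask = target != 0
    hinge = F.relu(-target * s_diff + margin)
    rank_loss = hinge[mask].mean() if mask.any() else student_scores.new_zeros(())

    # Distribution-based matching: match the teacher's softmax distribution over the
    # anchor's context scores via temperature-scaled KL divergence.
    dist_loss = F.kl_div(
        F.log_softmax(student_scores / temperature, dim=0),
        F.softmax(teacher_scores / temperature, dim=0),
        reduction="sum",
    ) * temperature ** 2

    return rank_loss, dist_loss
```

In this reading, the rank-based term cares only about ordering (robust to the teacher's absolute score scale), while the distribution-based term transfers the relative magnitudes of the teacher's scores; combining the two is what the paper describes as complementary.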

Experimental Evaluation

LLP's efficacy is extensively evaluated across eight public benchmarks, demonstrating its ability to outperform standalone MLPs significantly in both transductive and production settings. Remarkably, LLP not only approaches but, in several instances, surpasses the performance of its GNN teachers, indicating its potential in effectively leveraging graph relational knowledge. Furthermore, LLP demonstrates a striking inference speedup, up to 70.68× faster than GNNs on large-scale datasets, underscoring its practicality for applications demanding real-time inference.
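To make the source of the speedup concrete, below is a minimal, hypothetical sketch of the student side: the MLP scores a candidate link purely from the two endpoint feature vectors, so inference reduces to feature lookups and feed-forward passes with no neighborhood fetching or message passing. The class name, dot-product decoder, and dimensions are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MLPLinkPredictor(nn.Module):
    """Student link predictor: embeds each node from its own features only."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, x_u, x_v):
        # Per-node embeddings depend only on each node's own features,
        # so no neighbor gathering is needed at inference time.
        h_u, h_v = self.encoder(x_u), self.encoder(x_v)
        return (h_u * h_v).sum(dim=-1)  # dot-product link score

# Usage: score a batch of candidate links with plain feature lookups.
feat = torch.randn(1000, 64)               # node feature table
pairs = torch.randint(0, 1000, (256, 2))   # candidate (u, v) pairs
model = MLPLinkPredictor(64, 128)
scores = model(feat[pairs[:, 0]], feat[pairs[:, 1]])
```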

Implications and Future Directions

The proposed LLP framework marks a significant stride toward resolving the trade-off between efficiency and performance in graph-based link prediction. By distilling relational graph knowledge into MLPs, LLP paves the way for deploying scalable and efficient link prediction models without substantially compromising accuracy. The research opens avenues for further exploration of relational KD techniques and their applications across diverse graph learning scenarios.

The findings also suggest intriguing possibilities for future research, including the exploration of more sophisticated context node sampling strategies and the adaptation of LLP across a broader array of graph-based tasks. Additionally, the concept of relational KD can potentially be extended beyond graph structures, offering a rich domain for academic inquiry.

Conclusion

LLP represents a novel paradigm in graph-based link prediction by ingeniously leveraging relational knowledge distillation. The combination of performance, efficiency, and scalability that LLP offers marks a considerable advancement in the field, bridging the gap between graph neural networks and MLPs. This work not only contributes to the theoretical foundations of graph learning but also has significant practical implications for real-world applications requiring rapid link prediction inference.