
Train Your Own GNN Teacher: Graph-Aware Distillation on Textual Graphs (2304.10668v1)

Published 20 Apr 2023 in cs.LG

Abstract: How can we learn effective node representations on textual graphs? Graph Neural Networks (GNNs) that use Language Models (LMs) to encode textual information of graphs achieve state-of-the-art performance in many node classification tasks. Yet, combining GNNs with LMs has not been widely explored for practical deployments due to scalability issues. In this work, we tackle this challenge by developing a Graph-Aware Distillation framework (GRAD) to encode graph structures into an LM for graph-free, fast inference. Different from conventional knowledge distillation, GRAD jointly optimizes a GNN teacher and a graph-free student over the graph's nodes via a shared LM. This encourages the graph-free student to exploit graph information encoded by the GNN teacher while, at the same time, enabling the GNN teacher to better leverage textual information from unlabeled nodes. As a result, the teacher and the student models learn from each other to improve their overall performance. Experiments on eight node classification benchmarks in both transductive and inductive settings showcase GRAD's superiority over existing distillation approaches for textual graphs.
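To make the joint-optimization idea concrete, the following is a minimal PyTorch sketch of a GRAD-style objective: a shared encoder feeds both a graph-aware teacher and a graph-free student, with supervised losses on labeled nodes and a distillation term on all nodes. All names, dimensions, the mean-aggregation GNN, and the MLP stand-in for the LM encoder are illustrative assumptions; the paper fine-tunes a transformer LM and uses its own architectures and loss weighting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for the shared LM encoder; in the paper this is a fine-tuned
# transformer over each node's raw text, not an MLP over feature vectors.
class SharedLM(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))
    def forward(self, x):
        return self.net(x)

class GNNTeacher(nn.Module):
    """One round of mean-neighbor aggregation, then a linear classifier."""
    def __init__(self, hid_dim, n_classes):
        super().__init__()
        self.lin = nn.Linear(2 * hid_dim, n_classes)
    def forward(self, h, adj):
        agg = adj @ h  # adj: row-normalized adjacency (N x N)
        return self.lin(torch.cat([h, agg], dim=-1))

class GraphFreeStudent(nn.Module):
    """Classifies from the shared encoding alone -- no graph at inference."""
    def __init__(self, hid_dim, n_classes):
        super().__init__()
        self.lin = nn.Linear(hid_dim, n_classes)
    def forward(self, h):
        return self.lin(h)

# Toy data with hypothetical shapes: 100 nodes, 32-dim features, 5 classes.
N, D, H, C = 100, 32, 64, 5
x = torch.randn(N, D)
adj = torch.rand(N, N)
adj = adj / adj.sum(dim=1, keepdim=True)   # row-normalize
y = torch.randint(0, C, (N,))
labeled = torch.zeros(N, dtype=torch.bool)
labeled[:20] = True                        # 20 labeled, 80 unlabeled nodes

lm, teacher, student = SharedLM(D, H), GNNTeacher(H, C), GraphFreeStudent(H, C)
opt = torch.optim.Adam([*lm.parameters(), *teacher.parameters(),
                        *student.parameters()], lr=1e-3)

for step in range(200):
    h = lm(x)                  # shared encoder used by both models
    t_logits = teacher(h, adj) # graph-aware teacher
    s_logits = student(h)      # graph-free student
    # Supervised loss for both models on labeled nodes, plus a KL term
    # pulling the student toward the teacher on every node. Because the
    # encoder is shared, the student's losses also update the representation
    # the teacher sees, which is how the two models learn from each other.
    loss = (F.cross_entropy(t_logits[labeled], y[labeled])
            + F.cross_entropy(s_logits[labeled], y[labeled])
            + F.kl_div(F.log_softmax(s_logits, dim=-1),
                       F.softmax(t_logits.detach(), dim=-1),
                       reduction="batchmean"))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At deployment time only `lm` and `student` are kept, so inference requires no neighbor fetching or message passing, which is the source of the speedup the abstract describes.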

Authors (9)
  1. Costas Mavromatis
  2. Vassilis N. Ioannidis
  3. Shen Wang
  4. Da Zheng
  5. Soji Adeshina
  6. Jun Ma
  7. Han Zhao
  8. Christos Faloutsos
  9. George Karypis
Citations (23)
