Meta-GNN: On Few-shot Node Classification in Graph Meta-learning (1905.09718v1)

Published 23 May 2019 in cs.LG and stat.ML

Abstract: Meta-learning has received tremendous recent attention as a possible approach for mimicking human intelligence, i.e., acquiring new knowledge and skills with little or even no demonstration. Most existing meta-learning methods are proposed to tackle few-shot learning problems, such as image and text classification, in Euclidean domains. However, very few works apply meta-learning to non-Euclidean domains, and recently proposed graph neural network (GNN) models do not perform effectively on graph few-shot learning problems. Towards this, we propose a novel graph meta-learning framework -- Meta-GNN -- to tackle the few-shot node classification problem in graph meta-learning settings. It obtains the prior knowledge of classifiers by training on many similar few-shot learning tasks and then classifies nodes from new classes with only a few labeled samples. Additionally, Meta-GNN is a general model that can be straightforwardly incorporated into any existing state-of-the-art GNN. Our experiments conducted on three benchmark datasets demonstrate that the proposed approach not only improves node classification performance by a large margin on few-shot learning problems in the meta-learning paradigm, but also learns a more general and flexible model for task adaptation.

Meta-GNN: A Novel Approach for Few-Shot Node Classification in Graph Meta-Learning

The paper "Meta-GNN: On Few-shot Node Classification in Graph Meta-learning" introduces a novel framework called Meta-GNN, designed to address the challenge of few-shot learning in non-Euclidean domains. Specifically, the paper focuses on the problem of node classification within graph data using graph neural networks (GNNs). Unlike traditional deep learning models that excel in Euclidean domains such as images and text, GNNs face challenges in few-shot learning scenarios where only a limited number of labeled data points are available for new classes. The Meta-GNN framework leverages the meta-learning paradigm, creating a robust mechanism allowing GNNs to quickly adapt to new tasks with minimal labeled data.

Key Contributions and Methodology

Meta-GNN advances node classification on graphs primarily through meta-learning. The framework generalizes across tasks by training on a diverse set of few-shot learning tasks, thereby learning an initialization that adapts to new classes from very few examples. Meta-GNN is architecture-agnostic: it can wrap any existing GNN, and the paper instantiates it with modern architectures such as Graph Convolutional Networks (GCN) and Simple Graph Convolution (SGC), integrating them into the meta-learning cycle.
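
To make the base-learner side concrete, the following is a minimal sketch (not the authors' released code) of an SGC-style node classifier that a framework like Meta-GNN could wrap. In SGC, feature propagation with the normalized adjacency matrix is precomputed, so the learnable component reduces to a linear classifier; the function and class names here are illustrative assumptions.

```python
import torch
import torch.nn as nn


def sgc_precompute(features: torch.Tensor, adj_norm: torch.Tensor, k: int = 2) -> torch.Tensor:
    """Apply k rounds of SGC-style propagation X <- A_hat @ X with a (dense) normalized adjacency."""
    for _ in range(k):
        features = adj_norm @ features
    return features


class SGCClassifier(nn.Module):
    """Linear classifier over pre-propagated node features; serves as the base learner."""

    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)
```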

The authors construct training episodes representative of few-shot tasks by sampling from existing node classes within the graph. Each sampled task is split into a support set and a query set, and tasks are repeatedly drawn to build a comprehensive meta-training set. Model parameters are then updated with a meta-learning strategy inspired by MAML (Model-Agnostic Meta-Learning), which optimizes the shared initialization across tasks so that it adapts rapidly when exposed to novel tasks.
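
The episode loop can be illustrated with the following sketch, assuming a plain linear classifier over pre-propagated node features (e.g., the output of sgc_precompute above) and a MAML-style inner/outer update. The function names, the 2-way/1-shot defaults, and the learning rates are illustrative assumptions, not the paper's exact settings.

```python
import random

import torch
import torch.nn.functional as F


def sample_task(labels, candidate_classes, n_way=2, k_shot=1, q_query=5):
    """Sample one few-shot node-classification task: n_way classes,
    k_shot support nodes and q_query query nodes per class."""
    task_classes = random.sample(candidate_classes, n_way)
    support, query = [], []
    for new_label, c in enumerate(task_classes):
        nodes = (labels == c).nonzero(as_tuple=True)[0].tolist()
        picked = random.sample(nodes, k_shot + q_query)
        support += [(n, new_label) for n in picked[:k_shot]]
        query += [(n, new_label) for n in picked[k_shot:]]
    return support, query


def maml_step(x, labels, candidate_classes, params,
              inner_lr=0.5, meta_lr=0.01, inner_steps=1):
    """One MAML-style meta-update on a single sampled task.
    `params` holds the shared initialization: weight 'W' (in_dim x n_way) and bias 'b'."""
    support, query = sample_task(labels, candidate_classes)
    s_idx, s_y = map(torch.tensor, zip(*support))
    q_idx, q_y = map(torch.tensor, zip(*query))

    # Inner loop: adapt a temporary copy of the parameters on the support set.
    fast = dict(params)
    for _ in range(inner_steps):
        loss = F.cross_entropy(x[s_idx] @ fast["W"] + fast["b"], s_y)
        grads = torch.autograd.grad(loss, list(fast.values()), create_graph=True)
        fast = {k: v - inner_lr * g for (k, v), g in zip(fast.items(), grads)}

    # Outer loop: evaluate the adapted parameters on the query set and
    # update the shared initialization.
    meta_loss = F.cross_entropy(x[q_idx] @ fast["W"] + fast["b"], q_y)
    meta_grads = torch.autograd.grad(meta_loss, list(params.values()))
    with torch.no_grad():
        for (_, p), g in zip(params.items(), meta_grads):
            p -= meta_lr * g
    return meta_loss.item()
```

In this sketch, params would be a dict of leaf tensors created with requires_grad=True (e.g., a zero-initialized weight matrix and bias). Repeated calls to maml_step over many sampled tasks correspond to meta-training; at meta-test time, the same inner-loop adaptation is applied to the support set of a novel class before classifying its query nodes.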

Experimental Evaluation

The empirical performance of Meta-GNN was evaluated on three benchmark datasets: Cora, Citeseer, and Reddit. The results show a marked improvement in few-shot node classification over traditional GNNs and embedding-based approaches such as DeepWalk and Node2Vec, with notable gains in both 1-shot and 3-shot scenarios. The advantage was especially pronounced on the more challenging datasets with fewer samples, highlighting the model's ability to generalize across tasks and adapt quickly to new classes with minimal data.

Implications and Future Directions

The successful application of meta-learning in the context of non-Euclidean data structures, as demonstrated by Meta-GNN, opens up new avenues for research. The framework not only enhances the performance of node classification tasks with limited labeled data but also suggests potential for extending meta-learning techniques to other challenging graph-related problems. Future research could explore the adaptation of Meta-GNN to problems like few-shot graph classification and zero-shot learning, where the model would encounter entirely novel classes not present during training. Additionally, the implications for practical applications are extensive, ranging from social network analysis to bioinformatics.

The Meta-GNN framework represents a promising step forward in meta-learning for graphs, with its ability to provide a robust foundation for learning in few-shot scenarios. As the field progresses, it will be intriguing to observe its adaptation and enhancement in light of emerging graph data challenges and meta-learning methodologies.

Authors (6)
  1. Fan Zhou (110 papers)
  2. Chengtai Cao (3 papers)
  3. Kunpeng Zhang (31 papers)
  4. Goce Trajcevski (11 papers)
  5. Ting Zhong (4 papers)
  6. Ji Geng (2 papers)
Citations (205)