
EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs (1902.10191v3)

Published 26 Feb 2019 in cs.LG, cs.SI, and stat.ML

Abstract: Graph representation learning resurges as a trending research subject owing to the widespread use of deep learning for Euclidean data, which inspire various creative designs of neural networks in the non-Euclidean domain, particularly graphs. With the success of these graph neural networks (GNN) in the static setting, we approach further practical scenarios where the graph dynamically evolves. Existing approaches typically resort to node embeddings and use a recurrent neural network (RNN, broadly speaking) to regulate the embeddings and learn the temporal dynamics. These methods require the knowledge of a node in the full time span (including both training and testing) and are less applicable to the frequent change of the node set. In some extreme scenarios, the node sets at different time steps may completely differ. To resolve this challenge, we propose EvolveGCN, which adapts the graph convolutional network (GCN) model along the temporal dimension without resorting to node embeddings. The proposed approach captures the dynamism of the graph sequence through using an RNN to evolve the GCN parameters. Two architectures are considered for the parameter evolution. We evaluate the proposed approach on tasks including link prediction, edge classification, and node classification. The experimental results indicate a generally higher performance of EvolveGCN compared with related approaches. The code is available at \url{https://github.com/IBM/EvolveGCN}.

Authors (9)
  1. Aldo Pareja (7 papers)
  2. Giacomo Domeniconi (7 papers)
  3. Jie Chen (602 papers)
  4. Tengfei Ma (73 papers)
  5. Toyotaro Suzumura (60 papers)
  6. Hiroki Kanezashi (15 papers)
  7. Tim Kaler (7 papers)
  8. Tao B. Schardl (8 papers)
  9. Charles E. Leiserson (10 papers)
Citations (932)

Summary

EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs

The paper "EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs" presents a novel methodological advancement in graph representation learning for dynamic, time-evolving graph structures. Recognizing the limitations of static models in capturing temporal dynamics, the authors propose an approach that uses recurrent neural networks (RNNs) to adapt the Graph Convolutional Network (GCN) parameters across time steps, thereby effectively handling frequent and significant changes in the node set.

Introduction

The resurgence of interest in graph representation learning is attributed to the successful applications of deep learning in handling Euclidean data structures like images and text. Extending these successes to graphs, which represent complex pairwise interactions between entities, poses unique challenges given their non-Euclidean nature and combinatorial complexity. Real-world applications often involve dynamically evolving graphs, necessitating models that can adapt to temporal changes. Examples include evolving social networks, citation networks, and financial transaction networks. Previously, methods typically relied on node embeddings regulated by RNNs to capture temporal dynamics, which necessitated knowledge of nodes across the complete time span. This becomes problematic when node sets change frequently or entirely between time steps.

Methodology

The core contribution of the paper is EvolveGCN, a method that eschews the conventional focus on node embeddings in favor of adapting the model parameters themselves using RNNs. The design leverages two main architectures for model adaptation:

  1. EvolveGCN-H: Here, the GCN weights act as hidden states of a GRU, evolving over time with the node embeddings serving as inputs. Mathematically, this is represented as:

$$W_t^{(l)} = \text{GRU}\big(H_t^{(l)}, W_{t-1}^{(l)}\big)$$

  2. EvolveGCN-O: In this configuration, the GCN weights are treated as outputs of the LSTM architecture, focusing solely on the dynamic update of the model parameters. This avoids the need for node features as inputs:

$$W_t^{(l)} = \text{LSTM}\big(W_{t-1}^{(l)}\big)$$

Both versions keep the GCN parameters adaptable without growing the model size with additional time steps, making the overall model as manageable as a typical RNN.
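The weight-evolution step above can be sketched in a few lines. The snippet below is a minimal NumPy illustration of the EvolveGCN-H idea, not the authors' implementation: a GRU step whose hidden state is the GCN weight matrix itself, followed by a standard GCN propagation using the evolved weights. The gate-parameter names (`Wz`, `Uz`, etc.) and the embedding summary `H_summary` (assumed to be shaped like the weight matrix) are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_evolve(W_prev, H_summary, P):
    """One GRU step treating the GCN weight matrix as the hidden state
    (EvolveGCN-H sketch). H_summary is a summary of node embeddings shaped
    like W; P holds the (hypothetical) gate parameters."""
    z = sigmoid(P["Wz"] @ H_summary + P["Uz"] @ W_prev)          # update gate
    r = sigmoid(P["Wr"] @ H_summary + P["Ur"] @ W_prev)          # reset gate
    W_cand = np.tanh(P["Wh"] @ H_summary + P["Uh"] @ (r * W_prev))
    return (1 - z) * W_prev + z * W_cand                          # evolved W_t

def gcn_layer(A_hat, H, W):
    """Standard GCN propagation with the current (evolved) weights,
    using ReLU activation."""
    return np.maximum(A_hat @ H @ W, 0.0)
```

At each time step one would call `gru_evolve` to obtain `W_t` and then feed it to `gcn_layer` together with that step's normalized adjacency `A_hat` and node features; the EvolveGCN-O variant would instead update the weights from `W_prev` alone via an LSTM-style recurrence.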

Experimental Evaluation

The proposed EvolveGCN models were comprehensively evaluated on multiple tasks and datasets, including synthetic and real-world datasets such as SBM, Bitcoin OTC, Bitcoin Alpha, UCI Messages, Autonomous Systems, Reddit Hyperlink Network, and Elliptic. Key tasks included link prediction, edge classification, and node classification. Here are some notable findings from the experiments:

  • Link Prediction: EvolveGCN outperformed traditional GCN and GCN-GRU models on most datasets, illustrating the effectiveness of parameter evolution in capturing temporal dynamics. Specifically, EvolveGCN-H showed superior performance on datasets such as UCI and AS, while EvolveGCN-O demonstrated its strengths on SBM and UCI, with high MAP and MRR scores.
  • Edge Classification: Both variants of EvolveGCN achieved higher F1 scores across datasets such as Bitcoin OTC, Bitcoin Alpha, and Reddit, compared to the baseline methods.
  • Node Classification: Evaluated on the Elliptic dataset, EvolveGCN-O showed improved results for classifying illicit transactions, though both dynamic models effectively handled the emergence of unprecedented events like the dark market shutdown.
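For the link-prediction task above, candidate edges at each time step are scored from the node embeddings produced by the evolved GCN. The sketch below uses a sigmoid of the endpoint inner product purely for illustration; the paper itself scores edges with a small network over the endpoint embeddings, so treat this scorer as an assumed stand-in.

```python
import numpy as np

def link_scores(H_t, candidate_edges):
    """Score candidate edges (u, v) at time t from node embeddings H_t
    (rows = nodes). Illustrative inner-product scorer, squashed to (0, 1)."""
    src = H_t[[u for u, v in candidate_edges]]
    dst = H_t[[v for u, v in candidate_edges]]
    dots = (src * dst).sum(axis=1)
    return 1.0 / (1.0 + np.exp(-dots))
```

Ranking candidate edges by these scores is what metrics like MAP and MRR, reported in the experiments, evaluate.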

Implications and Future Work

The use of EvolveGCN provides a robust framework for handling dynamic graph data, addressing issues related to the appearance and disappearance of nodes. Practical implications include enhanced adaptability for real-time applications in social network analysis, financial fraud detection, and evolving citation networks. Theoretically, this approach offers a compelling argument for focusing on model adaptation rather than node embeddings, particularly in non-stationary environments.

Future work could explore further refinements in the integration of RNN architectures and GCNs, perhaps leveraging attention mechanisms or continuous-time methods to further enhance the temporal sensitivity of the models. Additionally, expanding the application of EvolveGCN to more diverse types of graphs and tasks could substantiate its generalizability and robustness.

In conclusion, "EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs" presents a significant advancement in dynamic graph processing, with compelling evidence of its efficacy across multiple benchmarks and tasks. The methodological shift towards evolving model parameters marks a vital step in the ongoing development of graph neural networks adaptable to real-world temporal complexities.
