
K-Core based Temporal Graph Convolutional Network for Dynamic Graphs (2003.09902v4)

Published 22 Mar 2020 in cs.LG, cs.SI, and stat.ML

Abstract: Graph representation learning is a fundamental task in various applications that strives to learn low-dimensional embeddings for nodes that can preserve graph topology information. However, many existing methods focus on static graphs while ignoring evolving graph patterns. Inspired by the success of graph convolutional networks(GCNs) in static graph embedding, we propose a novel k-core based temporal graph convolutional network, the CTGCN, to learn node representations for dynamic graphs. In contrast to previous dynamic graph embedding methods, CTGCN can preserve both local connective proximity and global structural similarity while simultaneously capturing graph dynamics. In the proposed framework, the traditional graph convolution is generalized into two phases, feature transformation and feature aggregation, which gives the CTGCN more flexibility and enables the CTGCN to learn connective and structural information under the same framework. Experimental results on 7 real-world graphs demonstrate that the CTGCN outperforms existing state-of-the-art graph embedding methods in several tasks, including link prediction and structural role classification. The source code of this work can be obtained from \url{https://github.com/jhljx/CTGCN}.

Authors (5)
  1. Jingxin Liu (22 papers)
  2. Chang Xu (323 papers)
  3. Chang Yin (1 paper)
  4. Weiqiang Wu (5 papers)
  5. You Song (15 papers)
Citations (37)

Summary

Overview of K-Core Based Temporal Graph Convolutional Network for Dynamic Graphs

The paper presents a novel graph convolutional network (GCN) framework aimed at extending the capabilities of current graph neural networks in processing dynamic graphs. Titled "K-Core based Temporal Graph Convolutional Network for Dynamic Graphs", this research elaborates on a method called CTGCN that leverages the concept of k-core decomposition to enhance the structural and connective properties maintained within dynamic graph embeddings.

Key Concepts and Methodology

Graph representation learning has seen significant strides, but existing methods generally focus on static rather than dynamic graphs. Dynamic graphs, whose node and edge configurations evolve over time, introduce additional complexity that static models cannot capture. This paper introduces a k-core based temporal GCN, CTGCN, designed to address these challenges.

Generalized GCN Framework

The CTGCN builds upon a generalized GCN framework that splits the convolution process into two main phases: feature transformation and feature aggregation. By doing so, it allows the integration of both local (connective proximity) and global (structural similarity) information within a unified framework. This generalized approach can incorporate various GCN techniques, enhancing flexibility and applicability across different graph configurations.
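The two phases described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the identity weight matrix, and the mean aggregator are all illustrative choices (CTGCN's aggregation differs depending on whether connective or structural information is being learned).

```python
def transform(features, weight):
    """Phase 1: feature transformation, h_i = W x_i (linear map per node)."""
    return [[sum(w * x for w, x in zip(row, f)) for row in weight]
            for f in features]

def aggregate(adj, hidden):
    """Phase 2: feature aggregation, here a mean over each node's
    neighbours plus the node itself (one of many possible choices)."""
    out = []
    for i, h in enumerate(hidden):
        nbrs = [hidden[j] for j in adj[i]] + [h]
        out.append([sum(col) / len(nbrs) for col in zip(*nbrs)])
    return out

# Toy path graph 0-1-2 with 2-d features and an identity transform.
adj = {0: [1], 1: [0, 2], 2: [1]}
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
W = [[1.0, 0.0], [0.0, 1.0]]  # identity, purely for illustration

H = aggregate(adj, transform(X, W))
```

Separating the two phases is what gives the framework its flexibility: swapping the transformation (e.g., an MLP) or the aggregator (e.g., attention-weighted sums) yields different GCN variants without changing the overall pipeline.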

Core-based Convolutional Module

The main novelty of CTGCN lies in its core-based convolutional module. Utilizing k-core decomposition, which partitions graphs into nested subgraphs based on connectivity density, the CTGCN layers can propagate node features across multiple scales. This approach allows for better capture of graph hierarchies and improves the expressiveness of node embeddings.
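The k-core decomposition underlying this module can be computed by iterative peeling: repeatedly delete all nodes of degree at most k, and a node's core number is the level k at which it is removed. The sketch below is a plain-Python version of this standard algorithm (CTGCN itself would use an optimized implementation), shown only to make the nested-subgraph structure concrete.

```python
def core_numbers(adj):
    """Core number of every node via iterative peeling.

    adj maps each node to a list of its neighbours. A node removed
    while peeling at level k belongs to the k-core but not the
    (k+1)-core, so its core number is k.
    """
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    core = {}
    remaining = set(adj)
    k = 0
    while remaining:
        while True:
            peel = [v for v in remaining if deg[v] <= k]
            if not peel:
                break
            for v in peel:
                core[v] = k
                remaining.discard(v)
                for u in adj[v]:
                    if u in remaining:
                        deg[u] -= 1
        k += 1
    return core

# Triangle {0,1,2} plus a pendant node 3: the triangle forms the
# 2-core, while node 3 only survives in the 1-core.
core = core_numbers({0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]})
```

Because the k-cores are nested (every (k+1)-core is contained in the k-core), propagating features within each core in turn lets the model mix information at multiple scales of connectivity density.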

Graph Evolution Module

In addition to structural enrichment, CTGCN employs a graph evolution module to model temporal dependencies. This module facilitates the understanding of dynamic interactions by leveraging recurrent neural networks (e.g., GRUs, LSTMs). Such integration enables the CTGCN to naturally learn from historical patterns within dynamic graphs, providing a robust mechanism for capturing changes over time.
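The recurrent update can be illustrated with a single GRU step per snapshot. This is a heavily simplified sketch: the state is a scalar and the gate parameters are hand-picked scalars, whereas the actual model uses matrix-valued gates over full embedding vectors; the parameter names are assumptions for illustration.

```python
import math

def gru_step(h_prev, x, p):
    """One GRU update of a node's state given its snapshot embedding x.
    p holds scalar stand-ins for the gate weight matrices."""
    sig = lambda t: 1.0 / (1.0 + math.exp(-t))
    z = sig(p["wz"] * x + p["uz"] * h_prev)            # update gate
    r = sig(p["wr"] * x + p["ur"] * h_prev)            # reset gate
    h_tilde = math.tanh(p["wh"] * x + p["uh"] * (r * h_prev))
    return (1 - z) * h_prev + z * h_tilde

params = {"wz": 1.0, "uz": 0.5, "wr": 1.0, "ur": 0.5,
          "wh": 1.0, "uh": 0.5}

# Feed one node's structural embedding from each graph snapshot
# through the recurrent state (toy values).
h = 0.0
for x_t in [0.2, 0.5, -0.1]:
    h = gru_step(h, x_t, params)
```

The update and reset gates let the state selectively retain or discard history, which is why gated recurrences (GRUs, LSTMs) are a natural fit for modeling how node roles drift across snapshots.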

Experimental Insights

The CTGCN was evaluated across several real-world dynamic graph datasets, including social networks and communication graphs, demonstrating superior performance in tasks like link prediction and structural role classification. Notably, CTGCN outperformed existing models like DynGEM, EvolveGCN, and GCRN, indicating its efficacy in capturing both node-level interactions and evolving graph patterns.

Performance and Scalability

Experiments illustrated that CTGCN not only maintains accurate predictions over time but also remains computationally efficient. Its time complexity is nearly linear in the number of nodes and edges, making it a practical solution for large-scale dynamic graphs.

Implications and Future Directions

The introduction of CTGCN marks a promising advance in dynamic graph processing, offering insights into how temporal and structural properties can be concurrently preserved within graph neural networks. Beyond immediate applications, such as anomaly detection and recommendation systems, this work sets a framework for future research in graph-based machine learning.

Future improvements could explore integrating more sophisticated temporal architectures or optimizing k-core computations to further enhance scalability. Additionally, adaptations allowing for real-time graph analytics and response could position CTGCN as a core component within dynamic, data-driven systems.

In conclusion, the CTGCN represents a considerable step forward in dynamic graph embedding techniques, aligning graph neural networks more closely with practical, real-world data applications. Its success may stimulate further exploration into multi-scale, temporal graph learning paradigms, encouraging the development of even more robust and versatile models.