
Foundations and modelling of dynamic networks using Dynamic Graph Neural Networks: A survey (2005.07496v2)

Published 13 May 2020 in cs.SI, cs.LG, and stat.ML

Abstract: Dynamic networks are used in a wide range of fields, including social network analysis, recommender systems, and epidemiology. Representing complex networks as structures changing over time allows network models to leverage not only structural but also temporal patterns. However, as dynamic network literature stems from diverse fields and makes use of inconsistent terminology, it is challenging to navigate. Meanwhile, graph neural networks (GNNs) have gained a lot of attention in recent years for their ability to perform well on a range of network science tasks, such as link prediction and node classification. Despite the popularity of graph neural networks and the proven benefits of dynamic network models, there has been little focus on graph neural networks for dynamic networks. To address the challenges resulting from the fact that this research crosses diverse fields as well as to survey dynamic graph neural networks, this work is split into two main parts. First, to address the ambiguity of the dynamic network terminology we establish a foundation of dynamic networks with consistent, detailed terminology and notation. Second, we present a comprehensive survey of dynamic graph neural network models using the proposed terminology.

Citations (195)

Summary

  • The paper introduces a dynamic network taxonomy that clarifies inconsistent terminology by categorizing evolving networks into a structured ‘network cube.’
  • The paper surveys DGNN models that integrate spatial and temporal dynamics, comparing stacked architectures with integrated approaches using RNNs and graph convolutions.
  • The paper outlines future research directions, emphasizing scalable continuous frameworks and extended applications such as dynamic link prediction and multiplex network analysis.

Overview of Dynamic Graph Neural Networks for Modelling Dynamic Networks

The paper "Foundations and Modelling of Dynamic Networks Using Dynamic Graph Neural Networks: A Survey" by Joakim Skarding et al. provides a comprehensive examination of dynamic networks, particularly in the context of Dynamic Graph Neural Networks (DGNNs). The survey navigates the challenges posed by the inconsistent terminology of interdisciplinary dynamic network research and systematically organizes existing DGNN methodologies to offer a clearer picture of the field.

Dynamic networks, characterized by their temporal change in structure, are integral within various domains such as social network analysis, recommender systems, and epidemiology. These networks, inherently dynamic due to properties like time-varying nodes and edges, introduce complexities not addressed in traditional static network models. DGNNs are a class of neural networks aimed at effectively capturing and predicting temporal patterns and structural changes within these networks.

Contributions and Framework

The paper is divided into two primary sections. The first provides a theoretical foundation and taxonomy for dynamic networks, addressing the complications caused by inconsistent terminology. Dynamic networks are classified by node dynamics, link duration, and temporal granularity, framed as a "dynamic network cube." This structure distinguishes different kinds of evolving and temporal networks, including node-static, node-dynamic, interaction, and strictly evolving networks, across discrete and continuous representations.
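The distinction between discrete and continuous representations can be illustrated with a minimal sketch. The data layout and the `to_snapshots` helper are illustrative choices, not constructs from the paper:

```python
# Discrete representation: an ordered list of graph snapshots,
# each snapshot a static edge set observed in one time window.
snapshots = [
    {(0, 1), (1, 2)},          # window 0
    {(0, 1), (1, 2), (2, 3)},  # window 1
    {(1, 2), (2, 3)},          # window 2
]

# Continuous representation: a timestamped event stream, here an
# interaction network where each event is (u, v, timestamp).
events = [
    (0, 1, 0.3),
    (1, 2, 0.7),
    (2, 3, 1.4),
    (0, 1, 2.1),
]

def to_snapshots(events, window):
    """Coarsen a continuous event stream into discrete snapshots by
    bucketing events into fixed-length time windows. Information about
    exact event times within a window is lost in the process."""
    buckets = {}
    for u, v, t in events:
        buckets.setdefault(int(t // window), set()).add((u, v))
    return [buckets.get(i, set()) for i in range(max(buckets) + 1)]
```

The one-way coarsening in `to_snapshots` is why the choice of representation matters: a continuous record can always be discretized, but snapshots cannot recover the fine-grained event timing that continuous models exploit.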

The second part surveys DGNN models, focusing on how these models use neural architectures to encode both spatial and temporal dynamics in networks. The surveyed models are grouped as:

  1. Stacked DGNNs, which layer a temporal model such as an RNN on top of a GNN: each network snapshot is encoded separately by the GNN, and the resulting embedding sequence is fed to the temporal model.
  2. Integrated DGNNs, which fuse the temporal and spatial components into a unified architecture, for example by placing graph convolution operations inside an LSTM.
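The stacked design can be sketched in a few lines of NumPy. This is a deliberately minimal caricature, not any surveyed model: a single GCN-style layer encodes each snapshot, and a vanilla RNN cell carries per-node state across snapshots.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN-style propagation step: symmetrically normalized
    adjacency (with self-loops) times node features times weights."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.tanh(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W)

def stacked_dgnn(adjs, X, W_g, W_h, W_x):
    """Stacked-DGNN sketch: encode each snapshot with the GNN, then
    update a vanilla RNN hidden state per node with that encoding.
    All weight matrices here are illustrative placeholders."""
    n, h = X.shape[0], W_h.shape[0]
    H = np.zeros((n, h))                  # RNN hidden state per node
    for A in adjs:                        # one snapshot at a time
        Z = gcn_layer(A, X, W_g)          # spatial encoding
        H = np.tanh(Z @ W_x + H @ W_h)    # temporal update
    return H
```

An integrated DGNN would instead replace the matrix multiplications inside the recurrent update itself with graph convolutions, so the spatial and temporal operations are no longer separable stages.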

The paper highlights the versatility of DGNN architectures, which incorporate a variety of GNN types (e.g., GCN, GGNN, R-GCN) and adapt LSTM, GRU, or self-attention mechanisms to model temporal dependencies across graph snapshots. The landscape is further divided between discrete models, which operate on graph snapshots, and continuous models, which encode interactions in real time using methods such as temporal point processes (TPPs) and time embeddings.
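A time embedding maps a raw timestamp (or inter-event gap) into a dense vector that a continuous model can consume. The sinusoidal form below is one common construction offered here only in the spirit of such embeddings; it is not the exact formulation of any particular surveyed model:

```python
import numpy as np

def time_embedding(t, dim, max_period=10000.0):
    """Encode a scalar time value as a vector of cosines and sines
    whose frequencies follow a geometric progression, so that both
    short and long time gaps remain distinguishable."""
    half = dim // 2
    freqs = max_period ** (-np.arange(half) / half)
    return np.concatenate([np.cos(t * freqs), np.sin(t * freqs)])
```

Because the encoding is a fixed, deterministic function of the timestamp, events only milliseconds apart receive nearby vectors, which is precisely the fine-grained signal that snapshot-based discrete models discard.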

Implications and Future Directions

The survey emphasizes that coherent definitions and terminology are critical to advancing dynamic network research and to leveraging the modelling advances achieved with DGNNs. In particular, DGNNs offer potential improvements in tasks such as dynamic link prediction, enabling better forecasting of changes in real-world network structures.
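Dynamic link prediction typically reduces to scoring candidate edges for a future time step from learned node embeddings. The sigmoid inner-product decoder below is one common readout, used here as an illustrative assumption; the surveyed models employ a range of decoders:

```python
import numpy as np

def link_scores(H, candidate_edges):
    """Score candidate edges from node embeddings H (one row per
    node) with a sigmoid inner-product decoder; higher scores mean
    the edge is predicted as more likely to appear."""
    return {(u, v): 1.0 / (1.0 + np.exp(-H[u] @ H[v]))
            for u, v in candidate_edges}
```

In the stacked and integrated architectures above, `H` would be the final hidden state produced after processing the observed snapshots or event stream.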

There are significant opportunities for future research, including:

  • Continuous Model Expansion: Advancing continuous DGNN frameworks to handle various network dynamics beyond specific cases (e.g., strictly evolving or interaction networks).
  • Scalability and Efficiency: Developing models that efficiently handle large-scale dynamic datasets, maintaining both high temporal granularity and runtime efficiency.
  • Extended Applications: Expanding to dynamic networks featuring multi-type nodes and edges, directed and signed interactions, and multilayer or multiplex network structures.

In conclusion, the survey by Joakim Skarding et al. equips researchers with a detailed understanding of the dynamic network modeling landscape, while also elucidating paths for future advancements in the field of DGNNs.
