Representation Learning for Dynamic Graphs: A Survey (1905.11485v2)

Published 27 May 2019 in cs.LG and stat.ML

Abstract: Graphs arise naturally in many real-world applications including social networks, recommender systems, ontologies, biology, and computational finance. Traditionally, machine learning models for graphs have been mostly designed for static graphs. However, many applications involve evolving graphs. This introduces important challenges for learning and inference since nodes, attributes, and edges change over time. In this survey, we review the recent advances in representation learning for dynamic graphs, including dynamic knowledge graphs. We describe existing models from an encoder-decoder perspective, categorize these encoders and decoders based on the techniques they employ, and analyze the approaches in each category. We also review several prominent applications and widely used datasets and highlight directions for future research.

Citations (405)

Summary

  • The paper provides a comprehensive review of methods for representing evolving graphs using encoder-decoder frameworks.
  • It categorizes encoding techniques such as temporal aggregation, RNN-based sequence models, and matrix decomposition to capture dynamic changes.
  • The survey outlines key challenges and future research avenues including scalable models and novel benchmarks for dynamic networks.

Representation Learning for Dynamic Graphs: A Survey

The paper "Representation Learning for Dynamic Graphs: A Survey" presents an extensive review of contemporary developments in the domain of representation learning specifically targeted at dynamic graphs. Dynamic graphs, unlike static graphs, evolve over time with nodes, edges, and attributes potentially changing, introducing a set of unique challenges in learning and inference processes. This survey focuses primarily on recent advancements in dynamic graph settings, encompassing both dynamic knowledge graphs and general dynamic networks.

At the core of the paper is an encoder-decoder view of methods that capture the evolving nature of graph data. Encoders extract representations of nodes and relations, which decoders then use for tasks such as node classification, link prediction, and graph classification. The review categorizes encoders by the techniques they employ: aggregating temporal observations into a static graph, using time as a regularizer, decomposition methods, random-walk approaches, and sequence models such as recurrent neural networks (RNNs). A minimal sketch of this encoder-decoder split follows.
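
The sketch below is a hypothetical, minimal stand-in for an RNN-based encoder paired with an inner-product decoder for link prediction; it is not a model from the paper, and real encoders in the survey typically interleave a graph neural network with the recurrence rather than feeding raw per-snapshot node features.

```python
import torch
import torch.nn as nn

class SnapshotRNNEncoder(nn.Module):
    """GRU over per-node feature sequences, one feature vector per snapshot."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hid_dim, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (num_nodes, num_snapshots, in_dim)
        _, h_last = self.rnn(feats)        # h_last: (1, num_nodes, hid_dim)
        return h_last.squeeze(0)           # final embedding per node

class InnerProductDecoder(nn.Module):
    """Scores a candidate edge (u, v) from the encoded node embeddings."""
    def forward(self, z: torch.Tensor, u: int, v: int) -> torch.Tensor:
        return torch.sigmoid((z[u] * z[v]).sum())

# Toy usage: 5 nodes observed over 4 snapshots with 8-dimensional features.
z = SnapshotRNNEncoder(in_dim=8, hid_dim=16)(torch.randn(5, 4, 8))
print(InnerProductDecoder()(z, 0, 3))      # predicted probability of edge 0-3
```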

Significant emphasis is placed on how these encoders handle dynamics. The paper discusses aggregating temporal observations into a static form, imposing temporal smoothness constraints so that embeddings change gradually between consecutive snapshots, and applying spectral and matrix decomposition techniques. It also covers RNNs and their variants, noting their efficacy at modeling the temporal dependencies inherent in dynamic graphs, and it highlights the potential of diachronic and sequence-model-based encoders to adapt to graph structures that change over time.
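
As an illustration of the "time as a regularizer" idea, one common formulation (notation introduced here, not taken from the paper) learns per-snapshot embeddings $Z_t$ that reconstruct each adjacency matrix $A_t$ while penalizing abrupt changes between consecutive snapshots:

$$
\min_{Z_1,\dots,Z_T} \; \sum_{t=1}^{T} \left\lVert A_t - Z_t Z_t^{\top} \right\rVert_F^2 \;+\; \lambda \sum_{t=2}^{T} \left\lVert Z_t - Z_{t-1} \right\rVert_F^2 ,
$$

where $\lambda$ controls how strongly embeddings are smoothed across time.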

On the decoder side, the paper distinguishes time-predicting decoders from time-conditioned decoders. Time-predicting decoders forecast when an event, such as the formation of an edge, will occur; they often employ temporal point processes, which capture the continuous nature of time in these evolving structures. Time-conditioned decoders instead make predictions tied to a specific temporal context, using entity and relation embeddings that adapt to the input time and thereby bridging temporal slices of dynamic graphs with their static counterparts.
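
For the time-predicting case, a generic temporal point process formulation (the specific intensity parameterizations differ across the surveyed models) defines a conditional intensity $\lambda_{uv}(t)$ for an interaction between nodes $u$ and $v$, computed from their embeddings. Given the last event at time $t_0$, the density of the next interaction time and its expectation are

$$
p_{uv}(t) \;=\; \lambda_{uv}(t)\,\exp\!\left(-\int_{t_0}^{t} \lambda_{uv}(s)\, \mathrm{d}s\right),
\qquad
\hat{t}_{uv} \;=\; \int_{t_0}^{\infty} t\, p_{uv}(t)\, \mathrm{d}t ,
$$

so the decoder can both score how likely an event is at a given time and predict when the next event is expected to occur.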

The survey also reviews related lines of work, such as statistical relational models and models for spatiotemporal graphs, showing how they contribute to the broader picture of dynamic network analysis. It identifies a number of open questions and research avenues, ranging from broader dataset benchmarks to more expressive and computationally efficient models, that could advance the understanding and application of dynamic graph representations.

Practically, this comprehensive overview gives researchers a foundation for applying or extending these approaches to real-world settings spanning social networks, biology, finance, and more. The survey not only catalogues existing methodologies but also points to opportunities for future work, notably emphasizing the need for new benchmarks, models capable of handling hyperedges, and innovations in time-sensitive learning frameworks.
