
Dynamic Graph Representation Learning via Self-Attention Networks (1812.09430v2)

Published 22 Dec 2018 in cs.LG, cs.SI, and stat.ML

Abstract: Learning latent representations of nodes in graphs is an important and ubiquitous task with widespread applications such as link prediction, node classification, and graph visualization. Previous methods on graph representation learning mainly focus on static graphs; however, many real-world graphs are dynamic and evolve over time. In this paper, we present Dynamic Self-Attention Network (DySAT), a novel neural architecture that operates on dynamic graphs and learns node representations that capture both structural properties and temporal evolutionary patterns. Specifically, DySAT computes node representations by jointly employing self-attention layers along two dimensions: structural neighborhood and temporal dynamics. We conduct link prediction experiments on two classes of graphs: communication networks and bipartite rating networks. Our experimental results show that DySAT has a significant performance gain over several different state-of-the-art graph embedding baselines.
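
The abstract names two attention dimensions: structural self-attention over each node's neighborhood within a graph snapshot, and temporal self-attention over that node's embeddings across snapshots. The sketch below illustrates that idea in PyTorch; it is not the authors' implementation, and the class names, layer sizes, dense adjacency mask, and causal temporal mask are assumptions made here for illustration.

```python
# Minimal sketch of the two attention dimensions named in the abstract.
# Not the authors' implementation: class names, layer sizes, the dense
# adjacency mask, and the causal temporal mask are illustrative assumptions.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class StructuralSelfAttention(nn.Module):
    """Attention over a node's neighbors within a single graph snapshot."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn_src = nn.Linear(out_dim, 1, bias=False)
        self.attn_dst = nn.Linear(out_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) dense adjacency, 1 = edge
        h = self.proj(x)                                  # (N, out_dim)
        scores = self.attn_src(h) + self.attn_dst(h).T    # (N, N) pairwise logits
        scores = F.leaky_relu(scores, negative_slope=0.2)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)             # weights over neighbors
        return F.elu(alpha @ h)                           # (N, out_dim)


class TemporalSelfAttention(nn.Module):
    """Scaled dot-product attention over a node's embeddings across snapshots."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (N, T, dim) per-node embeddings across T snapshots
        q, k, v = self.q(h), self.k(h), self.v(h)
        scores = q @ k.transpose(-2, -1) / math.sqrt(h.size(-1))  # (N, T, T)
        # causal mask: each time step attends only to itself and earlier steps
        t = h.size(1)
        causal = torch.tril(torch.ones(t, t, dtype=torch.bool, device=h.device))
        scores = scores.masked_fill(~causal, float("-inf"))
        return torch.softmax(scores, dim=-1) @ v                  # (N, T, dim)


if __name__ == "__main__":
    N, T, D = 5, 4, 16
    structural = StructuralSelfAttention(D, D)
    temporal = TemporalSelfAttention(D)
    feats = torch.randn(T, N, D)                                  # features per snapshot
    adjs = ((torch.rand(T, N, N) > 0.5) | torch.eye(N, dtype=torch.bool)).float()
    # structural attention within each snapshot, then temporal attention per node
    per_snapshot = torch.stack([structural(feats[t], adjs[t]) for t in range(T)])
    out = temporal(per_snapshot.permute(1, 0, 2))                 # (N, T, D)
    print(out.shape)
```

The ordering used here, structural attention within each snapshot followed by temporal attention across snapshots, is one straightforward reading of "self-attention layers along two dimensions"; the paper's full architecture includes further details (e.g., multi-head attention) omitted for brevity.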

Authors (5)
  1. Aravind Sankar (9 papers)
  2. Yanhong Wu (18 papers)
  3. Liang Gou (18 papers)
  4. Wei Zhang (1489 papers)
  5. Hao Yang (328 papers)
Citations (115)
