
Self-Supervised Temporal Graph Learning with Temporal and Structural Intensity Alignment (2302.07491v3)

Published 15 Feb 2023 in cs.LG, cs.AI, and cs.SI

Abstract: Temporal graph learning aims to generate high-quality representations for graph-based tasks with dynamic information, and has recently garnered increasing attention. In contrast to static graphs, temporal graphs are typically organized as node interaction sequences over continuous time rather than as an adjacency matrix. Most temporal graph learning methods model current interactions by incorporating historical neighborhoods. However, such methods only consider first-order temporal information and disregard crucial high-order structural information, resulting in suboptimal performance. To address this issue, we propose a self-supervised method called S2T for temporal graph learning, which extracts both temporal and structural information to learn more informative node representations. Specifically, the initial node representations are combined with first-order temporal information and high-order structural information separately to calculate two conditional intensities. An alignment loss is then introduced to optimize the node representations by narrowing the gap between the two intensities, making them more informative. Concretely, in addition to modeling temporal information using historical neighbor sequences, we further consider structural knowledge at both the local and global levels. At the local level, we generate a structural intensity by aggregating features from high-order neighbor sequences. At the global level, a global representation is generated from all nodes to adjust the structural intensity according to the active statuses of different nodes. Extensive experiments demonstrate that the proposed model S2T achieves up to a 10.13% performance improvement over state-of-the-art competitors on several datasets.
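The two-intensity alignment idea can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the paper's exact formulation: a temporal intensity computed from first-order historical neighbors, a structural intensity computed from high-order neighbors and rescaled by a global activeness gate, and an MSE-style alignment loss that narrows the gap between them. All module names, tensor shapes, aggregation choices (mean pooling), and the specific loss form are illustrative assumptions.

```python
# Hypothetical, simplified sketch of the S2T alignment idea from the abstract.
# Aggregation, gating, and the MSE alignment loss are assumptions for
# illustration only; see the paper for the actual formulation.
import torch
import torch.nn as nn

class IntensityAlignmentSketch(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.temporal_proj = nn.Linear(dim, 1)    # scores first-order temporal context
        self.structural_proj = nn.Linear(dim, 1)  # scores high-order structural context
        self.global_gate = nn.Linear(dim, 1)      # adjusts for per-node activeness

    def forward(self, node, first_order_nbrs, high_order_nbrs, global_repr):
        # node:             (batch, dim) target node embeddings
        # first_order_nbrs: (batch, k1, dim) historical neighbor embeddings
        # high_order_nbrs:  (batch, k2, dim) high-order neighbor embeddings
        # global_repr:      (dim,) representation aggregated over all nodes

        # Temporal intensity: target node plus mean of historical neighbors.
        temporal_ctx = node + first_order_nbrs.mean(dim=1)
        lam_temporal = torch.sigmoid(self.temporal_proj(temporal_ctx)).squeeze(-1)

        # Structural intensity: aggregate high-order neighbors, then rescale
        # by a global gate reflecting how active each node is in the graph.
        structural_ctx = node + high_order_nbrs.mean(dim=1)
        gate = torch.sigmoid(self.global_gate(node * global_repr)).squeeze(-1)
        lam_structural = torch.sigmoid(
            self.structural_proj(structural_ctx)).squeeze(-1) * gate

        # Alignment loss narrows the gap between the two intensities.
        align_loss = torch.mean((lam_temporal - lam_structural) ** 2)
        return lam_temporal, lam_structural, align_loss

# Toy usage with random tensors.
model = IntensityAlignmentSketch(dim=16)
node = torch.randn(4, 16)
first_nbrs = torch.randn(4, 5, 16)
high_nbrs = torch.randn(4, 10, 16)
global_repr = torch.randn(16)
_, _, loss = model(node, first_nbrs, high_nbrs, global_repr)
loss.backward()
```

In this reading, the alignment loss acts as a self-supervised signal: neither intensity is a label for the other; instead each branch regularizes the other so the learned node representations carry both first-order temporal and high-order structural information.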

Authors (8)
  1. Meng Liu (112 papers)
  2. Ke Liang (41 papers)
  3. Yawei Zhao (12 papers)
  4. Wenxuan Tu (23 papers)
  5. Sihang Zhou (37 papers)
  6. Xinwang Liu (101 papers)
  7. Kunlun He (15 papers)
  8. Xinbiao Gan (5 papers)
Citations (40)