DynaGen: Unifying Temporal Knowledge Graph Reasoning with Dynamic Subgraphs and Generative Regularization (2512.12669v1)

Published 14 Dec 2025 in cs.LG and cs.AI

Abstract: Temporal Knowledge Graph Reasoning (TKGR) aims to complete missing factual elements along the timeline. Depending on the temporal position of the query, the task is categorized into interpolation and extrapolation. Existing interpolation methods typically embed temporal information into individual facts to complete missing historical knowledge, while extrapolation techniques often leverage sequence models over graph snapshots to identify recurring patterns for future event prediction. These methods face two critical challenges: limited contextual modeling in interpolation and cognitive generalization bias in extrapolation. To address these, we propose a unified method for TKGR, dubbed DynaGen. For interpolation, DynaGen dynamically constructs entity-centric subgraphs and processes them with a synergistic dual-branch GNN encoder to capture evolving structural context. For extrapolation, it applies a conditional diffusion process, which forces the model to learn underlying evolutionary principles rather than just superficial patterns, enhancing its ability to predict unseen future events. Extensive experiments on six benchmark datasets show DynaGen achieves state-of-the-art performance. On average, compared to the second-best models, DynaGen improves the Mean Reciprocal Rank (MRR) score by 2.61 points for interpolation and 1.45 points for extrapolation.

Summary

  • The paper introduces an integrated approach that unifies interpolation and extrapolation using dynamic subgraph construction and diffusion-based generative regularization.
  • The method employs a dual-branch GNN and a hybrid Transformer/MLP-Mixer, achieving +2.61 MRR for interpolation and +1.45 MRR for extrapolation compared to previous models.
  • The architecture enhances temporal reasoning and robustness for event forecasting and knowledge base completion, setting a new state of the art in temporal KG analysis.

DynaGen: A Unified Architecture for Temporal Knowledge Graph Reasoning

Motivation and Problem Statement

Temporal Knowledge Graph Reasoning (TKGR) encompasses two critical tasks: interpolation (filling in missing historical facts) and extrapolation (predicting novel future events) on knowledge graphs indexed by time. Contemporary methods frequently suffer from limited contextual modeling for interpolation, wherein temporal facts are treated as nearly isolated quadruples, and from cognitive generalization bias in extrapolation, where sequence models over historical data induce myopic dependence on empirical patterns, inhibiting reasoning about emergent, unseen dynamics. These limitations are summarized in Figure 1.

Figure 1: Classical temporal knowledge graph approaches suffer from shallow contextual modeling in interpolation and generalization limitations in extrapolation.

DynaGen Architecture

DynaGen systematically unifies interpolation and extrapolation by combining a dynamic subgraph construction mechanism with generative regularization in an end-to-end architecture. The core pipeline is depicted in Figure 2.

Figure 2: DynaGen framework: dynamic subgraph extraction, dual-branch GNN encoding, generative diffusion-based regularization, Transformer/MLP-Mixer refinement, and final prediction.

Adaptive Dynamic Subgraph Construction

To overcome the isolation of quadruple-centric approaches, DynaGen dynamically constructs, for every query, a temporally weighted, entity-centric subgraph. The construction employs an MLP-driven predictor for the time window Δt, BFS expansion with multi-hop temporal neighbor aggregation, and exponentially decaying edge weights. This yields a compact yet contextually rich local region for message propagation.
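A minimal sketch of this construction, assuming a dict-of-lists temporal adjacency structure; the names `WindowPredictor` and `extract_subgraph`, and the `decay` hyperparameter, are illustrative rather than taken from the paper:

```python
import math
from collections import deque

import torch
import torch.nn as nn

class WindowPredictor(nn.Module):
    """Hypothetical MLP mapping a query embedding to a time window Δt."""
    def __init__(self, dim: int, max_window: float = 30.0):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.max_window = max_window

    def forward(self, query_emb: torch.Tensor) -> torch.Tensor:
        # Squash to (0, max_window) so the predicted window stays bounded.
        return torch.sigmoid(self.mlp(query_emb)) * self.max_window

def extract_subgraph(adj, center, t_query, delta_t, max_hops=2, decay=0.1):
    """BFS over temporal neighbors within [t_query - delta_t, t_query],
    weighting each edge by exponential temporal decay.

    adj: dict mapping entity -> list of (relation, neighbor, timestamp).
    Returns a list of (src, rel, dst, weight) edges.
    """
    edges, visited = [], {center}
    frontier = deque([(center, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for rel, nbr, ts in adj.get(node, []):
            if not (t_query - delta_t <= ts <= t_query):
                continue  # outside the predicted temporal window
            weight = math.exp(-decay * (t_query - ts))  # recency-weighted edge
            edges.append((node, rel, nbr, weight))
            if nbr not in visited:
                visited.add(nbr)
                frontier.append((nbr, hops + 1))
    return edges
```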

Synergistic Structure-Aware Encoding

The structural encoder, the Synergistic Structure-Aware Encoder (SSAE), employs a dual-branch GNN:

  • R-GCN Branch: Encodes inter-entity relation semantics with type-specific convolution.
  • GAT Branch: Computes attention-based neighborhood weighting, capturing structural salience.

Messages from both branches are gated by temporal recency and fused to produce representations enriched with semantic, structural, and temporal information. The ablation study confirms the criticality of the SSAE module: removing it incurs a 2.08-point MRR drop on YAGO.
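A sketch of one such fused layer using PyTorch Geometric's `RGCNConv` and `GATConv`; the scalar per-node recency feature and the sigmoid gate are assumptions about the fusion, not the paper's exact formulation:

```python
import torch
import torch.nn as nn
from torch_geometric.nn import RGCNConv, GATConv  # assumes PyTorch Geometric

class DualBranchLayer(nn.Module):
    """Illustrative dual-branch layer: R-GCN for relation semantics,
    GAT for structural salience, fused by a temporal-recency gate."""
    def __init__(self, dim: int, num_relations: int, heads: int = 4):
        super().__init__()
        self.rgcn = RGCNConv(dim, dim, num_relations)
        # heads concatenated back to dim (dim must be divisible by heads)
        self.gat = GATConv(dim, dim // heads, heads=heads)
        self.gate = nn.Sequential(nn.Linear(dim + 1, dim), nn.Sigmoid())

    def forward(self, x, edge_index, edge_type, node_recency):
        # node_recency: per-node scalar in [0, 1], e.g. aggregated
        # exponential decay of incident edge timestamps.
        h_rel = self.rgcn(x, edge_index, edge_type)   # relation-aware messages
        h_att = self.gat(x, edge_index)               # attention-weighted messages
        g = self.gate(torch.cat([x, node_recency.unsqueeze(-1)], dim=-1))
        return g * h_rel + (1 - g) * h_att            # recency-gated fusion
```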

Diffusion-based Generative Regularization

To address inductive bias in extrapolation, DynaGen applies conditional diffusion regularization to the SSAE outputs. During training, the GNN output is corrupted via a forward diffusion process, and a denoising network is trained to reconstruct the clean representation given only the query relation and time. This encourages the network to learn generalizable generative principles over local subgraph evolution rather than memorizing surface patterns. The module is disabled during inference.
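A minimal sketch of such a regularizer, assuming a standard DDPM-style linear noise schedule, an MSE reconstruction loss, and conditioning by simple concatenation; all three are illustrative choices, not confirmed details of the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffusionRegularizer(nn.Module):
    """Sketch of conditional diffusion over GNN outputs: corrupt the clean
    representation with Gaussian noise at a random step, then denoise it
    conditioned on the query relation and timestamp embeddings."""
    def __init__(self, dim: int, num_steps: int = 100):
        super().__init__()
        self.num_steps = num_steps
        # Linear noise schedule; cumulative products give closed-form q(z_t | z_0).
        betas = torch.linspace(1e-4, 0.02, num_steps)
        self.register_buffer("alpha_bar", torch.cumprod(1.0 - betas, dim=0))
        self.denoiser = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.SiLU(), nn.Linear(dim, dim)
        )

    def loss(self, z0, rel_emb, time_emb):
        # Sample a random diffusion step per example.
        t = torch.randint(0, self.num_steps, (z0.size(0),), device=z0.device)
        a = self.alpha_bar[t].unsqueeze(-1)
        noise = torch.randn_like(z0)
        zt = a.sqrt() * z0 + (1 - a).sqrt() * noise      # forward corruption
        cond = torch.cat([zt, rel_emb, time_emb], dim=-1)
        z0_hat = self.denoiser(cond)                     # reconstruct clean z0
        return F.mse_loss(z0_hat, z0)                    # training-only term
```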

Unified Contextual Reasoning

Post-regularization embeddings and local 1-hop contexts are processed by a hybrid Transformer/MLP-Mixer module. The Transformer encoder models atomic dependencies, while the Mixer aggregates information efficiently across the sequence. The final query representation is then used to rank all candidate entities.
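A compact sketch of this stage, assuming mean pooling over the sequence and dot-product scoring against the entity embedding table; `HybridReasoner` is an illustrative name:

```python
import torch
import torch.nn as nn

class HybridReasoner(nn.Module):
    """Sketch of the refinement stage: a Transformer encoder over the query
    token plus its 1-hop context, followed by an MLP-Mixer-style token-mixing
    block, pooled into a single query representation for entity ranking."""
    def __init__(self, dim: int, seq_len: int, heads: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.norm = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(  # mixes across sequence positions
            nn.Linear(seq_len, seq_len), nn.GELU(), nn.Linear(seq_len, seq_len)
        )

    def forward(self, tokens, entity_emb):
        # tokens: (batch, seq_len, dim) = query embedding + 1-hop context
        h = self.encoder(tokens)  # pairwise (attention-based) dependencies
        # Mixer-style residual token mixing over the sequence axis.
        h = h + self.token_mlp(self.norm(h).transpose(1, 2)).transpose(1, 2)
        q = h.mean(dim=1)                 # pooled query representation
        return q @ entity_emb.t()         # scores over all candidate entities
```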

Empirical Results

DynaGen sets a new state of the art, achieving +2.61 MRR for interpolation and +1.45 MRR for extrapolation over the respective second-best models on average across six benchmarks. On ICEWS14 (interpolation), DynaGen achieves 72.14 MRR, outperforming the previous best (ECEformer) by 2.68 points; for GDELT (extrapolation), DynaGen reaches 52.41 MRR versus 50.89 for the runner-up.
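For reference, MRR averages the reciprocal rank of the true entity over all queries (reported values such as 72.14 are MRR scaled by 100). A minimal batched computation in the raw setting, without the usual filtering of other true triples:

```python
import torch

def mean_reciprocal_rank(scores: torch.Tensor, targets: torch.Tensor) -> float:
    """scores: (batch, num_entities) model scores over all candidates;
    targets: (batch,) index of the true entity for each query."""
    # Rank of the target = 1 + number of entities scored strictly higher.
    target_scores = scores.gather(1, targets.unsqueeze(1))
    ranks = 1 + (scores > target_scores).sum(dim=1)
    return (1.0 / ranks.float()).mean().item()
```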

The ablation study shows that both SSAE and diffusion regularization are essential, with the omission of SSAE degrading performance far more than the omission of diffusion.

Model Depth Analysis

As shown in Figure 3, performance peaks at two SSAE layers; deeper GNNs degrade accuracy due to over-smoothing, and no metric surpasses the two-layer setting.

Figure 3: Link prediction metrics as a function of SSAE layer depth, peaking at two layers.

Theoretical and Practical Implications

The dual-branch GNN with adaptive temporal context ensures that model representations encode multi-faceted signals, surpassing previous models constrained to static or per-quadruple patterns. The inclusion of diffusion-based generative regularization directly tackles extrapolation bias, a deficiency common to multi-stage sequence models that rely on historical recurrence.

Practically, this unification implies that downstream TKGR applications—such as event forecasting and time-sensitive knowledge base completion—will exhibit both improved generalization on novel patterns and greater robustness to distributional shift. The architecture is agnostic to a specific task formulation and supports efficient batched inference.

Limitations and Future Directions

Despite its efficacy, the pipeline incurs non-trivial training cost, notably in the per-query dynamic subgraph generation, and relies on heuristic subgraph construction that may admit spurious signals or omit relevant but distant ones. Extending the framework with learned, neural subgraph extractors and developing sublinear sampling or caching techniques could reduce this overhead and the sensitivity to hyperparameters.

Conclusion

DynaGen presents an end-to-end paradigm for temporal knowledge graph reasoning that addresses critical deficiencies in prior art through dynamic context modeling and generative regularization. Its architecture consistently demonstrates superior empirical performance in both interpolation and extrapolation. This unified framework sets a strong foundation for extending generative and structural reasoning strategies in temporally-evolving relational domains.
