GRExplainer: TGNN Explanation Framework
- The paper introduces GRExplainer, a universal framework that unifies snapshot-based and event-based TGNN explanation through a shared node-sequence abstraction.
- It leverages a two-level generative RNN and retained-matrix formalism to synthesize structurally coherent subgraph explanations efficiently.
- Empirical results on six datasets demonstrate up to 98% runtime reduction and significant gains in fidelity and sparsity over previous methods.
GRExplainer is a universal, efficient, and user-friendly post-hoc explanation framework for Temporal Graph Neural Networks (TGNNs), designed to produce interpretable and coherent explanations for both snapshot-based and event-based dynamic graphs. By unifying the abstraction of graph input as node sequences, leveraging generative recurrent neural network modeling, and emphasizing efficiency and output cohesion, GRExplainer addresses major limitations of prior TGNN explanation methods regarding generality, computational tractability, and user accessibility (Li et al., 28 Dec 2025).
1. Motivation and Core Challenges
Explainability is critical for the deployment of TGNNs in domains such as fraud detection, social recommendation, and network security due to the black-box nature of deep models and the need for robust, trustworthy decision making. Previous TGNN explainers exhibit three significant drawbacks:
- Type specificity: Most tools are tailored to either discrete-time (snapshot-based) or continuous-time (event-based) TGNNs, hindering cross-model applicability.
- Computational inefficiency: Edge-level perturbation and search-based methods (e.g., MCTS) have costs scaling at least linearly with the number of edges, making them impractical for large-scale, high-frequency graphs.
- Lack of structural cohesion and accessibility: Outputs often comprise disconnected nodes or edges and frequently require prior knowledge about model structure or desired explanation size, increasing user burden and decreasing interpretability (Li et al., 28 Dec 2025).
2. Node Sequence Unification and Retained-Matrix Formalism
GRExplainer abstracts local subgraphs as "node sequences," forming a unified feature representation:
- For snapshot-based inputs, nodes are ordered by breadth-first search (BFS) and each node's timestamp $t_i$ is set to a fixed timeslot index.
- For event-based (continuous-time) graphs, $t_i$ is the true interaction timestamp, and nodes are sorted temporally.
The "retained-matrix" constrains allowed node connections for the generative process, ensuring output subgraph connectivity and limiting redundancy. Formally,
where is set to match the maximal BFS-layer width observed in the subgraph. This formulation is crucial for scalability and supports both graph types (Li et al., 28 Dec 2025).
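To make the abstraction concrete, here is a minimal Python sketch of the node-sequence ordering and the retained-matrix mask, under the GraphRNN-style reading above; the function names and the NetworkX-based representation are illustrative assumptions, not the paper's API.

```python
# Minimal sketch of the node-sequence abstraction and retained matrix.
import networkx as nx
import numpy as np

def build_node_sequence(subgraph, root, timestamps=None):
    """Order nodes as a sequence: BFS order for snapshot graphs,
    temporal order when true interaction timestamps are available."""
    if timestamps is None:
        # Snapshot-based input: BFS ordering from the target node.
        return list(nx.bfs_tree(subgraph, root).nodes())
    # Event-based input: sort nodes by their interaction timestamps.
    return sorted(subgraph.nodes(), key=lambda v: timestamps[v])

def retained_matrix(seq_len, M):
    """R[i, j] = 1 iff node j is one of the M predecessors of node i
    (0-indexed: max(0, i - M) <= j < i); all other slots are masked."""
    R = np.zeros((seq_len, seq_len), dtype=np.int8)
    for i in range(seq_len):
        R[i, max(0, i - M):i] = 1
    return R

# Example: a 6-node path graph, maximal BFS-layer width M = 2.
G = nx.path_graph(6)
seq = build_node_sequence(G, root=0)
R = retained_matrix(len(seq), M=2)
```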
3. Generative RNN Architecture for Explanation Synthesis
GRExplainer employs a two-level generative recurrent neural network (RNN) architecture:
- Graph-level RNN ($f_{\text{graph}}$): a GRU cell processes the sequence of retained (masked) adjacency vectors $S_i$ and computes a hidden state $h_i$ per time step/node.
- Edge-level RNN ($f_{\text{edge}}$): another GRU, conditioned on $h_i$ and seeded with random noise $z$, predicts binary adjacency vectors $\hat{S}_i$.
- An MLP layer transforms each edge-level hidden state into edge-probability logits $\theta_{i,j}$, from which the subgraph explanation is generated.
The generative semantics follow

$$p(S) = \prod_{i=1}^{N} p(S_i \mid S_{<i}), \qquad p(S_i \mid S_{<i}) = \prod_{j} p\big(S_{i,j} \mid S_{i,<j},\, S_{<i}\big),$$

with the inner product ranging only over the slots permitted by the retained matrix $R$.
This model automatically enforces structural connectivity and enables explanation of arbitrary TGNN predictions in a differentiable manner, without requiring manual parameter tuning from the user (Li et al., 28 Dec 2025).
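A minimal PyTorch sketch of this two-level generation loop follows; the class name, layer sizes, and the additive noise seeding are assumptions for illustration rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class TwoLevelGenerator(nn.Module):
    def __init__(self, M, hidden=64):
        super().__init__()
        self.M = M
        self.f_graph = nn.GRUCell(M, hidden)   # graph-level RNN over S_{i-1}
        self.f_edge = nn.GRUCell(1, hidden)    # edge-level RNN over edge bits
        self.out = nn.Linear(hidden, 1)        # MLP mapping to logits theta_{i,j}

    def forward(self, n_nodes):
        h = torch.zeros(1, self.f_graph.hidden_size)
        s_prev = torch.zeros(1, self.M)        # first node has no predecessors
        edge_probs = []
        for i in range(1, n_nodes):
            h = self.f_graph(s_prev, h)        # hidden state h_i
            h_e = h + torch.randn_like(h)      # seed edge RNN with h_i + noise z
            probs, x = [], torch.zeros(1, 1)
            for _ in range(min(i, self.M)):    # only retained-matrix slots
                h_e = self.f_edge(x, h_e)
                p = torch.sigmoid(self.out(h_e))  # edge probability
                probs.append(p)
                x = torch.bernoulli(p).detach()   # sampled bit feeds next step
            s_i = torch.cat(probs, dim=1)
            edge_probs.append(s_i)             # S_i-hat for node i
            s_prev = nn.functional.pad(s_i, (0, self.M - s_i.size(1)))
        return edge_probs
```

Sampling a hard bit for the next edge-RNN input while keeping the sigmoid probabilities for the loss is one common way to keep such a generator differentiable end to end.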
4. Loss Function, Optimization, and Algorithmic Workflow
Training of the generative explainer uses a binary cross-entropy objective augmented with two regularization terms,

$$\mathcal{L} = \mathcal{L}_{\text{BCE}} + \lambda_1 \mathcal{L}_{\text{size}} + \lambda_2 \mathcal{L}_{\text{fid}},$$

where:
- The size term $\mathcal{L}_{\text{size}}$ penalizes explanation size, promoting sparsity.
- The fidelity term $\mathcal{L}_{\text{fid}}$ enforces that the TGNN's output on the explanation subgraph $\hat{G}$ matches its prediction on the original input $G$.
- Hyperparameters $\lambda_1$ and $\lambda_2$ control the regularization strength. A minimal sketch of this objective follows the list.
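The sketch below assumes a KL-divergence form for the fidelity term and illustrative default weights; the paper's exact term definitions may differ.

```python
import torch.nn.functional as F

def explainer_loss(edge_probs, edge_targets, pred_exp, pred_orig,
                   lambda_1=0.01, lambda_2=1.0):
    # Reconstruction term: BCE between generated edge probabilities
    # and the edges actually present in the extracted subgraph.
    l_bce = F.binary_cross_entropy(edge_probs, edge_targets)
    # Size term: penalizes the expected number of retained edges.
    l_size = edge_probs.sum()
    # Fidelity term (assumed KL form): the TGNN's prediction on the
    # explanation subgraph should match that on the original input.
    l_fid = F.kl_div(pred_exp.log_softmax(-1),
                     pred_orig.softmax(-1), reduction='batchmean')
    return l_bce + lambda_1 * l_size + lambda_2 * l_fid
```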
Algorithmically, the process involves:
- Extracting a local subgraph $G$ and its node sequence $S$, and building the retained matrix $R$.
- Unrolling the two-level RNN to compute edge probabilities and assemble the candidate explanation $\hat{G}$.
- Backpropagating $\mathcal{L}$ to update the generator's parameters.
Separate templates are specified for snapshot- and event-based TGNNs, but the same underlying generative approach and loss apply, as in the sketch below (Li et al., 28 Dec 2025).
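In this hedged sketch of the per-instance loop, `tgnn`, `assemble`, and the optimizer settings are hypothetical placeholders, and `explainer_loss` refers to the objective sketched above.

```python
import torch

def explain_instance(tgnn, generator, assemble, subgraph, edge_targets,
                     n_nodes, epochs=100, lr=1e-3):
    """tgnn: the frozen target model; generator: the two-level RNN above;
    assemble: hypothetical helper turning edge probabilities into a
    soft-masked subgraph; edge_targets: edge bits of the extracted subgraph."""
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    with torch.no_grad():
        pred_orig = tgnn(subgraph)                    # prediction to explain
    for _ in range(epochs):
        probs = torch.cat(generator(n_nodes), dim=1)  # unroll both RNNs
        exp_graph = assemble(subgraph, probs)         # candidate explanation
        pred_exp = tgnn(exp_graph)
        loss = explainer_loss(probs, edge_targets, pred_exp, pred_orig)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return assemble(subgraph, probs.round())          # final hard explanation
```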
5. Computational Complexity and Comparative Efficiency
GRExplainer achieves per-instance time complexity $O(NM)$, where $N$ is the number of nodes in the extracted subgraph and $M$ is the maximal BFS-layer width. This is a major improvement over existing approaches (e.g., MCTS, edge perturbation), whose costs scale at least linearly with the number of edges $|E|$ and can be far worse in practice. Empirical results indicate up to 16× faster inference than the fastest prior method on the Mooc dataset and up to 98% runtime reduction on event graphs, supporting application to large-scale, high-frequency temporal graphs (Li et al., 28 Dec 2025).
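As a back-of-envelope illustration of this gap (hypothetical figures, not measurements from the paper):

```python
# O(N*M) generation vs. pairwise O(N^2)-style candidate evaluation.
N, M = 5_000, 16                  # subgraph nodes, maximal BFS-layer width
steps_grexplainer = N * M         # 80,000 generation steps
steps_pairwise = N * N            # 25,000,000 candidate-edge evaluations
print(steps_pairwise / steps_grexplainer)   # 312.5x fewer operations
```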
6. Empirical Results, Metrics, and Cohesion
Evaluation on six real-world datasets (Reddit-Binary, Bitcoin-Alpha, Bitcoin-OTC for snapshots; Reddit, Wikipedia, Mooc for event-based) with EvolveGCN, TGAT, and TGN as target TGNNs demonstrates:
- Generality: Applicability to all major TGNN architectures and graph formats.
- Fidelity and Sparsity: as measured by FID+ and AUFSC, GRExplainer provides superior explanation quality, with gains of up to 299% (AUFSC, TGN) and 35,440% (FID+, TGN) over type-matched baselines; a hedged sketch of an FID+-style computation follows this list.
- Cohesiveness: Explanations form connected subgraphs, consistently outscoring competitors on connectivity metrics.
- User-friendliness: No reliance on prior knowledge of model parameters or desired explanation size; explanations are generated directly via the trained model (Li et al., 28 Dec 2025).
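For concreteness, here is a hedged sketch of an FID+-style fidelity computation; the exact metric definition follows the broader explainability literature and may differ in detail from the paper's.

```python
import torch

def fid_plus(tgnn, graph, graph_without_exp):
    """FID+-style fidelity: how much the prediction changes when the
    explanatory part is removed (higher = more faithful explanation)."""
    with torch.no_grad():
        delta = tgnn(graph) - tgnn(graph_without_exp)
    return delta.abs().mean().item()
```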
A summary of empirical highlights:
| Model / Dataset | AUFSC Gain | FID+ Gain | Runtime Reduction / Speed-up |
|---|---|---|---|
| EvolveGCN | 60.3% | 194% | — |
| TGAT | 283% | 10,125% | Up to 98% |
| TGN | 299% | 35,440% | Up to 98% |
| Mooc (event-based) | — | — | 16× faster |
7. Limitations and Prospective Directions
GRExplainer currently generates explanations at the instance level (per-prediction). Global or class-level explanation remains open. Whole-graph classification tasks may involve higher computational overhead, suggesting the value of node selection or summarization extensions. The present architecture is limited to homogeneous node and edge types; generalizing to heterogeneous dynamic graphs will likely require type-aware modeling. Extensions toward multi-task settings and other dynamic graph variations are promising directions (Li et al., 28 Dec 2025).
GRExplainer is presented as the first TGNN explanation framework that generalizes across both input types and model families, leveraging sequence-based unification and generative RNNs to deliver strong fidelity, connectivity, and efficiency (Li et al., 28 Dec 2025).