Graph Temporal Classification
- Graph Temporal Classification is a framework for classifying graph-structured data by capturing the evolution of nodes and edges over time.
- It employs diverse methods such as graph-shapelet patterns, temporal kernels, and persistent homology to extract discriminative temporal features.
- GTC supports applications like outbreak detection, cybersecurity anomaly monitoring, and speech recognition with dynamic, interpretable models.
Graph Temporal Classification (GTC) refers to a family of methodologies and frameworks focused on the supervised classification of graph-structured data where temporal dynamics—either in the form of evolving node/edge attributes or the graph’s topology itself—are key to the predictive task. GTC sits at the intersection of graph data mining and temporal (sequence) analysis, addressing scenarios where the order and evolution of relationships carry crucial classification signal. Research in this area spans symbolic pattern mining, kernel methods, logic-based frameworks, persistent topology, neural architectures, and benchmark evaluations.
1. Temporal Graphs: Formal Representation and Motivation
The fundamental object of interest in GTC is the temporal graph, most commonly formalized as an ordered sequence of graph snapshots 𝒢 = (G₁, G₂, …, G_T). Each G_t = (V_t, E_t) represents the state of nodes and edges at discrete time step t. In some settings, edges (and sometimes nodes) themselves carry timestamps, leading to an alternative view of the temporal graph as a set of timestamped edges {(u, v, t)} with t ∈ ℕ.
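The two encodings above can be sketched in a few lines of Python; the node identifiers and snapshot data are purely illustrative:

```python
# (a) Snapshot sequence: one edge set per discrete time step t = 1..T.
snapshots = [
    {(0, 1)},            # G_1
    {(0, 1), (1, 2)},    # G_2
    {(1, 2), (2, 3)},    # G_3
]

def to_edge_stream(snapshots):
    """Flatten a snapshot sequence into the timestamped-edge view {(u, v, t)}."""
    return {(u, v, t)
            for t, edges in enumerate(snapshots, start=1)
            for (u, v) in edges}

# (b) Timestamped edge set: the same dynamics as a set of (u, v, t) triples.
stream = to_edge_stream(snapshots)
assert (0, 1, 1) in stream and (1, 2, 3) in stream
```

Conversion in the other direction (bucketing triples by timestamp) is equally direct, which is why methods freely pick whichever view suits their algorithm.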
This temporal perspective is indispensable in applications where dynamic phenomena—the evolution of information, contact, or dependency structures—bear on the target class: examples include outbreak detection in propagation networks, event prediction in communications, anomaly detection in cybersecurity, and the classification of functional states in evolving biological or financial systems (Wang, 2016, Oettershagen et al., 2019, Zola et al., 2021, Pritam et al., 14 Feb 2025).
2. Symbolic, Kernel, and Topological Approaches
Early frameworks for GTC depart from vector time series paradigms by explicitly encoding graph evolution:
- Graph-Shapelet Patterns: By mapping time-variant graphs to sequences of basic graph statistics (e.g., number of vertices plus edges at each timestamp), univariate time series are obtained and classical shapelet mining is applied. The corresponding subseries are then mapped back to subgraphs, and their temporal transformation sequences (recording ordered vertex and edge additions/deletions) are mined to find compact, highly discriminative "graph-shapelet patterns." Classification proceeds by edit-similarity of transformation sequences, formalized as sim(S₁, S₂) = m(S₁, S₂) / max(|S₁|, |S₂|), where m(S₁, S₂) counts matching operations in the two transformation sequences (Wang, 2016). This workflow captures both structural and temporal information ignored by static graph classifiers.
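A minimal sketch of the first and last steps of this pipeline, assuming the |V_t| + |E_t| statistic and a simplified positional matching similarity (the paper's edit-based measure differs in detail; both function names are illustrative):

```python
def to_statistic_series(snapshots):
    """Map a time-variant graph to a univariate series: |V_t| + |E_t| per step."""
    series = []
    for edges in snapshots:
        vertices = {v for e in edges for v in e}
        series.append(len(vertices) + len(edges))
    return series

def edit_similarity(ops_a, ops_b):
    """Fraction of positionally matching operations between two transformation
    sequences -- a simplified stand-in for the paper's edit-based similarity."""
    matches = sum(1 for a, b in zip(ops_a, ops_b) if a == b)
    return matches / max(len(ops_a), len(ops_b))

snapshots = [{(0, 1)}, {(0, 1), (1, 2)}, {(1, 2), (2, 3)}]
assert to_statistic_series(snapshots) == [3, 5, 5]
assert edit_similarity(["+e01", "+e12"], ["+e01", "+e23"]) == 0.5
```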
- Temporal Graph Kernels: By "lifting" static graph kernel methods (e.g., random walk or Weisfeiler-Lehman kernels) to the temporal domain, these methods count temporal walks—i.e., edge- and node-label sequences adhering to monotonic time ordering. Distinct strategies trade off preservation of temporal detail (via Directed Line Graph Expansion) against computation (via Reduced Graph Representation and Static Expansion). Stochastic variants sample walks for further scalability, backed by provable approximation bounds. Classification accuracy is significantly improved over static kernel analogs, especially in tasks modeling information or epidemic dissemination (Oettershagen et al., 2019).
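The counting primitive underlying temporal walk kernels can be sketched by dynamic programming over the time-ordered edge stream; this illustrates time-respecting walk counting only, not any specific kernel from the paper:

```python
def count_temporal_walks(edge_stream, length):
    """Count walks of `length` edges whose timestamps strictly increase,
    via dynamic programming over the time-ordered edge stream."""
    edges = sorted(edge_stream, key=lambda e: e[2])
    # walks[(v, t)] = number of time-respecting walks of the current length
    # ending at node v with last edge at time t.
    walks = {}
    for (u, v, t) in edges:  # length-1 walks: one per edge occurrence
        walks[(v, t)] = walks.get((v, t), 0) + 1
    for _ in range(length - 1):
        nxt = {}
        for (u, v, t) in edges:
            # extend any walk ending at u strictly before time t
            count = sum(c for (w, s), c in walks.items() if w == u and s < t)
            if count:
                nxt[(v, t)] = nxt.get((v, t), 0) + count
        walks = nxt
    return sum(walks.values())

stream = {(0, 1, 1), (1, 2, 2), (1, 2, 3), (2, 3, 4)}
assert count_temporal_walks(stream, 2) == 4
```

A temporal walk kernel then compares two graphs by (label-refined) counts of such walks, which is exactly where the monotonic time ordering separates dissemination dynamics that static walks conflate.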
- Persistent Homology and Topological Techniques: Persistent homology applied to δ-temporal motifs enables a fully topology-driven approach. For each edge e, the so-called "average filtration" value is computed over local temporal neighborhoods, constructing a filtered clique complex for PH computation. The resulting persistence diagrams, compared via kernels such as the Persistence Scale Space (PSS) kernel, yield high-accuracy classification even on node class-free datasets (Pritam et al., 14 Feb 2025). The method provides theoretical stability to time perturbations and supports multi-scale description of local and global temporal structure.
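To illustrate the idea of assigning each edge a temporal filtration value, here is a toy stand-in; `edge_filtration` and its δ-window averaging rule are assumptions for illustration, and the paper's average filtration construction differs in detail:

```python
from statistics import mean

def edge_filtration(edge_stream, delta):
    """Toy stand-in for a temporal edge filtration: each static edge (u, v)
    gets the mean timestamp of incident edge occurrences falling within
    `delta` of its own occurrences. This only illustrates turning local
    temporal structure into one scalar per edge for a filtered complex."""
    occ = {}
    for (u, v, t) in edge_stream:
        occ.setdefault(frozenset((u, v)), []).append(t)
    filt = {}
    for e, times in occ.items():
        nearby = [s
                  for e2, ts in occ.items() if e & e2   # shares an endpoint
                  for s in ts
                  for t in times if abs(s - t) <= delta]
        filt[e] = mean(nearby)
    return filt

stream = {(0, 1, 1), (1, 2, 2), (2, 3, 10)}
filt = edge_filtration(stream, delta=2)
assert filt[frozenset((0, 1))] == 1.5   # own t=1 plus neighbor t=2
```

A PH library (e.g., building a filtered clique complex from these values) would then produce the persistence diagrams that feed the kernel classifier.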
3. GTC as Graph-Based Sequence Supervision: CTC and Beyond
A strikingly different class of GTC methods generalizes the Connectionist Temporal Classification (CTC) loss to handle graph-based supervision and label ambiguities:
- Graph-Based Temporal Classification: Instead of a single target sequence, the training target is a lattice or graph (for example, a WFST) encoding all plausible label sequences (such as N-best pseudo-labels or alternative phoneme transcriptions). The GTC loss generalizes CTC to L = −ln p(𝒢 | X) = −ln Σ_{π ∈ B⁻¹(𝒢)} p(π | X), where B maps frame-level alignments π to output sequences by collapsing repeats and removing blanks, and B⁻¹(𝒢) is the set of alignments accepted by the label graph 𝒢. The forward-backward algorithm is adapted to compute probabilities over all valid alignments in the label graph. This dual alignment (temporal and label) allows models to benefit from richer, uncertain supervision in both speech recognition and phoneme recognition scenarios (Moritz et al., 2020, Grafé et al., 5 Sep 2025).
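The loss definition can be illustrated by brute force over alignments, with the label graph represented extensionally as a set of accepted token sequences; this is a sketch of the definition only, since practical systems run forward-backward over a WFST:

```python
import itertools
import math

def collapse(path, blank=0):
    """B: merge consecutive repeats, then drop blanks (as in CTC)."""
    merged = [t for i, t in enumerate(path) if i == 0 or t != path[i - 1]]
    return tuple(t for t in merged if t != blank)

def gtc_loss(log_probs, accepted, blank=0):
    """-log of the summed probability of every frame-level alignment whose
    collapse lies in the label graph, here given extensionally as the set
    `accepted` of token sequences."""
    T, V = len(log_probs), len(log_probs[0])
    total = 0.0
    for path in itertools.product(range(V), repeat=T):
        if collapse(path, blank) in accepted:
            total += math.exp(sum(log_probs[t][path[t]] for t in range(T)))
    return -math.log(total)

# Two frames, vocabulary {blank=0, token 1}, uniform posteriors.
log_probs = [[math.log(0.5), math.log(0.5)]] * 2
loss = gtc_loss(log_probs, accepted={(1,)})
assert abs(loss + math.log(0.75)) < 1e-9  # alignments 01, 10, 11
```

Growing `accepted` to several alternative transcriptions shows the point of GTC: probability mass flows to whichever plausible label sequence the model can align best.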
- Extended GTC for Label Transitions: The GTC-e extension further incorporates transition information (e.g., speaker identity on edges), modeling both node emissions and edge transitions in the loss. The network outputs distributions for both, and the forward probability for a path π combines per-step factors of the form p(π_t | X) · p(π_{t−1} → π_t | X), coupling emission and transition probabilities. This enables multi-speaker ASR and any setting needing aligned sequence and meta-data prediction (Chang et al., 2022).
4. Graph Neural and End-to-End Deep Architectures for Temporal Graphs
Recent advances bring end-to-end differentiable models with explicit graph-temporal structure modeling:
- Dynamic Graph Learning: Methods such as TodyNet dynamically learn implicit, evolving adjacency matrices using learnable node embeddings, combine this with temporal graph transforms (e.g., passing information from previous to current time slots), and propagate features via dynamic GIN-style GNN layers. Temporal graph pooling with hierarchical clustering further enables extraction of global graph–temporal representations (Liu et al., 2023).
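The adjacency-learning step can be sketched with NumPy; `dynamic_adjacency` and its ReLU-plus-row-softmax form are an illustrative simplification of learned graph structure, not the exact TodyNet layer (which adds sparsification and per-time-slot variation):

```python
import numpy as np

def dynamic_adjacency(node_emb_src, node_emb_dst):
    """Derive a dense adjacency from two learnable node-embedding matrices,
    so graph structure is inferred from data rather than given a priori."""
    scores = np.maximum(node_emb_src @ node_emb_dst.T, 0.0)   # ReLU on scores
    e = np.exp(scores - scores.max(axis=1, keepdims=True))    # stable softmax
    return e / e.sum(axis=1, keepdims=True)                   # rows sum to 1

rng = np.random.default_rng(0)
E_src = rng.normal(size=(4, 8))   # 4 nodes, 8-dim embeddings
E_dst = rng.normal(size=(4, 8))
A = dynamic_adjacency(E_src, E_dst)
assert A.shape == (4, 4)
assert np.allclose(A.sum(axis=1), 1.0)
```

In an end-to-end model the embedding matrices receive gradients from the classification loss, so the adjacency itself is trained.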
- Hierarchical Pooling for MTSC: MTPool constructs dynamic graphs per time slice based on inter-variable similarity, feeds features through a combination of temporal convolution and GNNs, and applies a variational (encoder–decoder based) graph pooling strategy to produce permutation-invariant, input-adaptive centroids for hierarchical graph coarsening. This enhances multivariate time series classification and enables principled ablations between GNN types, graph construction, and pooling architectures (Duan et al., 2020).
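The coarsening step shared by hierarchical pooling methods reduces to two matrix products once an assignment matrix is available; this sketch assumes a fixed hard assignment rather than MTPool's learned variational one:

```python
import numpy as np

def graph_pool(X, A, S):
    """Assignment-based graph coarsening: a (soft) cluster-assignment
    matrix S maps n nodes to k centroids, pooling both features and
    adjacency. (MTPool learns S with a variational encoder-decoder.)"""
    return S.T @ X, S.T @ A @ S   # pooled features (k, d), adjacency (k, k)

X = np.arange(8.0).reshape(4, 2)                         # 4 nodes, 2 features
A = np.ones((4, 4))                                      # dense toy adjacency
S = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])   # 2 hard clusters
X_c, A_c = graph_pool(X, A, S)
assert X_c.shape == (2, 2) and A_c.shape == (2, 2)
assert np.allclose(X_c[0], [2.0, 4.0])                   # rows 0 and 1 summed
```

Stacking such pooling layers yields progressively coarser graphs, whose final representation feeds the classifier.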
- Self-Supervised and Contrastive Learning: Approaches such as GraphTNC augment node embeddings by contrastively training encoders to produce similar representations for temporally and topologically neighboring windows and distinct ones for separated windows, assuming piecewise smoothness in the temporal evolution. Encoder modules combine single-step GNNs and temporal RNNs, and representations are used for downstream classification (Zhang et al., 2022).
- Comprehensive Benchmarking: Systematic benchmarks have analyzed 60 variants (combinations of node features, edge construction, and GNN architectures) on MTSC datasets, revealing the critical impact of node feature choice (with raw series and spectral/differential entropy features as options), the superiority of adaptive edge learning over fixed graph structures, and the relative impact of GNN architectures (e.g., MEGAT, STGCN) (Yang et al., 14 Jan 2025).
5. Logic-Based, Specification, and Interpretability-Focused Approaches
- Graph Temporal Logic (GTL): GTL formalizes temporal and spatial constraints on evolving graphs through a recursive logic over node and edge labels with temporal and spatial quantifiers. Classification proceeds by learning GTL formulas with minimal misclassification, while identification algorithms maximize information gain vis-à-vis a prior. Case studies have shown zero/low misclassification and full-coverage informative formulas in industrial and robotics settings. GTL enables the extraction of interpretable, human-readable rules supporting both analysis and system design (Xu et al., 2019).
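The flavor of checking temporal operators over labeled graph traces can be sketched as follows; the predicates and trace encoding are illustrative and far simpler than full GTL, which also quantifies over graph neighborhoods:

```python
def eventually(trace, predicate):
    """'F p': the predicate holds at some time step of the labeled trace."""
    return any(predicate(labels) for labels in trace)

def always(trace, predicate):
    """'G p': the predicate holds at every time step of the labeled trace."""
    return all(predicate(labels) for labels in trace)

# Trace: per-step dict of node -> scalar label (a stand-in for the
# node/edge-labeled graph states GTL formulas range over).
trace = [{"a": 0.1, "b": 0.2}, {"a": 0.9, "b": 0.3}]
assert eventually(trace, lambda labels: labels["a"] > 0.5)
assert not always(trace, lambda labels: labels["a"] > 0.5)
```

Classification with learned formulas then amounts to evaluating each candidate formula on a trace and assigning the class whose formula is satisfied.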
- Pattern Extraction and Interpretable GTC: Methods such as MTS2Graph extract clusters of input segments (MHAPs) highly activating CNN neurons, organize their temporal relationships via evolution graphs, merge graphs across CNN layers based on receptive field overlap, and embed the resulting unified graph for interpretable classification. This approach maintains competitive performance while providing transparency into variable and temporal segment contributions (Younis et al., 2023).
6. Practical Applications and Future Directions
GTC methodologies have found applications in outbreak detection based on dynamic propagation structures (Wang, 2016), classification of social contagion processes (Oettershagen et al., 2019), motor imagery EEG decoding (Lim et al., 26 Jun 2025), phoneme recognition under G2P ambiguity (Grafé et al., 5 Sep 2025), and behavioral entity classification in temporal cybersecurity graphs (Zola et al., 2021). Emerging directions include node-class-free stable topological methods (Pritam et al., 14 Feb 2025), continual learning under evolving class spaces (Liu et al., 3 Mar 2025), cross-view contrastive fusion via GNN–Transformer hybrids (Sun et al., 22 Mar 2024), and real-time adaptive pipelines for streaming graph data (Gurevin et al., 2022).
Areas of open research highlighted include the improved preservation of fine-grained temporal detail during time series reduction, development of incremental and online pattern mining techniques, richer logic and expressive power in temporal specification languages, integration of advanced edge/graph construction strategies, combination with sequence-to-sequence models and multi-modal data streams, and extension to streaming, evolving, or partially observed temporal graphs.
7. Summary Table: Core GTC Methodologies
| Methodology/Key Paper | Graph Representation | Temporal Modeling | Classification Approach |
|---|---|---|---|
| Graph-shapelet (Wang, 2016) | Sequence of graph snapshots | Edit transformation | Shapelet pattern extraction, edit sim. |
| Temporal kernels (Oettershagen et al., 2019) | Temporal walks, time labels | Time-respecting walks | Random walk/WL kernels, SVM |
| Persistent homology (Pritam et al., 14 Feb 2025) | δ-temporal motifs, filtrations | Motif evolution | PH diagrams, kernel SVM |
| GTC (CTC-like) (Moritz et al., 2020, Grafé et al., 5 Sep 2025) | Label graphs (WFST/DAGs) | Sequence alignments | Extended CTC (forward-backward) |
| GNN/MTSC frameworks (Duan et al., 2020, Liu et al., 2023) | Learned adjacency, dynamic GNNs | Graph message passing | End-to-end (pooling/classifier) |
| Logic-based (Xu et al., 2019) | Node/edge-labeled graph traces | Logic over time/states | SAT of inferred GTL formula |
The table encapsulates several of the principal classes of GTC methodologies, their representation and temporal modeling strategies, and the classification decision layer.
References
- (Wang, 2016) Time-Variant Graph Classification
- (Oettershagen et al., 2019) Temporal Graph Kernels for Classifying Dissemination Processes
- (Duan et al., 2020) Multivariate Time Series Classification with Hierarchical Variational Graph Pooling
- (Moritz et al., 2020) Semi-Supervised Speech Recognition via Graph-based Temporal Classification
- (Xu et al., 2019) Graph Temporal Logic Inference for Classification and Identification
- (Gurevin et al., 2022) Towards Real-Time Temporal Graph Learning
- (Liu et al., 2023) TodyNet: Temporal Dynamic Graph Neural Network for Multivariate Time Series Classification
- (Younis et al., 2023) MTS2Graph: Interpretable Multivariate Time Series Classification with Temporal Evolving Graphs
- (Sun et al., 22 Mar 2024) GTC: GNN-Transformer Co-contrastive Learning for Self-supervised Heterogeneous Graph Representation
- (Yang et al., 14 Jan 2025) Benchmarking Graph Representations and Graph Neural Networks for Multivariate Time Series Classification
- (Pritam et al., 14 Feb 2025) Classification of Temporal Graphs using Persistent Homology
- (Liu et al., 3 Mar 2025) A Selective Learning Method for Temporal Graph Continual Learning
- (Lim et al., 26 Jun 2025) AGTCNet: A Graph-Temporal Approach for Principled Motor Imagery EEG Classification
- (Grafé et al., 5 Sep 2025) Graph Connectionist Temporal Classification for Phoneme Recognition