Temporal Knowledge Store Overview
- TKS is a repository that organizes time-annotated facts, enabling dynamic reasoning and inductive learning for temporal tasks.
- It supports semantic and temporal alignment through approaches like graph neural encoders, sinusoidal time embeddings, and fast vector indexing.
- The system enables efficient zero-shot inference and robust application in tasks such as temporal KG link prediction and temporal question answering.
A Temporal Knowledge Store (TKS) is a systematized, neural, and/or symbolic repository designed to index, organize, retrieve, and facilitate reasoning over facts annotated with explicit temporal information. TKSs underpin a spectrum of architectures in machine learning, most prominently in temporal knowledge graph link prediction, temporal question answering, and temporal aggregation in spiking neural networks (SNNs). Unlike static knowledge bases, a TKS provides mechanisms for representing temporal dynamics, supporting efficient and accurate inference—even in zero-shot or inductive settings—across tasks requiring semantic and temporal alignment.
1. Core Definitions and Formal Structure
A TKS is fundamentally constructed over a temporal knowledge graph (TKG). A TKG is a set of quadruples G = {(s, r, o, t)} ⊆ E × R × E × T, where E is the set of entities, R is the set of relations, and T enumerates observed timestamps. In practice, this formalizes time-resolved factual statements (e.g., “(Obama, presidentOf, USA, 2009)”).
Within the storage architecture, the TKS augments the TKG with learned representations, a fast similarity search engine (e.g., FAISS with IVFPQ or HNSW), and (optionally) neural or symbolic indexing structures for downstream inference (Pan et al., 4 Jun 2025, Qian et al., 6 Nov 2025). Storage schemas are tailored to downstream use: for spatio-temporal reasoning (POSTRA), temporal question answering (PoK), or temporal aggregation (SNNs). Granularity is decoupled from design—supporting arbitrary intervals, snapshots, or pointwise time.
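As a concrete sketch, the quadruple structure above can be held in a minimal fact store with per-timestamp snapshot views. All names here (`Quadruple`, `FactStore`) are illustrative, not taken from the cited systems:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class Quadruple:
    """One time-resolved fact (s, r, o, t) from the TKG."""
    subject: str
    relation: str
    obj: str
    timestamp: int  # index into the ordered set of observed timestamps T

class FactStore:
    """Minimal TKS fact store: quadruple list plus a per-timestamp snapshot index."""
    def __init__(self):
        self.quads = []
        self.by_time = defaultdict(list)  # snapshot view: t -> facts observed at t

    def add(self, q: Quadruple):
        self.quads.append(q)
        self.by_time[q.timestamp].append(q)

    def snapshot(self, t: int):
        """All facts observed at timestamp index t."""
        return self.by_time[t]

store = FactStore()
store.add(Quadruple("Obama", "presidentOf", "USA", 2009))
```

A real TKS layers learned embeddings and a vector index on top of such a symbolic store; the snapshot view corresponds to the interval/snapshot granularities mentioned above.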
2. Methodological Foundations: Representation and Retrieval
Different TKS implementations adopt distinct representational paradigms:
a) Graph Neural Encoders and Sinusoidal Time Embeddings
POSTRA employs two core message-passing modules:
- Relation-Relation Graph Encoder: For each query relation, a GNN propagates messages on a relation graph capturing 2-hop temporal patterns (h2h, h2t, t2h, t2t). The resulting relation embedding encodes fine-grained local and structural dependencies.
- Entity Encoder: Globally, for any query triple, a GNN assigns an initial embedding (relation-conditioned at the head entity) and propagates information from all adjacent quadruples. Each GNN layer fuses entity, relation, and sinusoidal time embeddings, yielding embeddings compositional over both path structure and temporal sequence (Pan et al., 4 Jun 2025).
The temporal encoding
τ(t)[2i] = sin(t / 10000^(2i/d)), τ(t)[2i+1] = cos(t / 10000^(2i/d))
for timestamp index t (with embedding dimension d) ensures granularity independence and facilitates transfer learning.
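Assuming the standard sinusoidal form (the paper's exact parameterization may differ), the timestamp-index encoding can be computed as:

```python
import numpy as np

def time_encoding(t: int, d: int = 8, base: float = 10000.0) -> np.ndarray:
    """Sinusoidal encoding of a timestamp *index* t.

    Because only the integer index enters the formula, swapping the
    granularity (minutes vs. days vs. years) just re-maps raw times to
    indices; no learned parameters change.
    """
    i = np.arange(d // 2)
    angles = t / base ** (2 * i / d)   # one frequency per pair of dims
    enc = np.empty(d)
    enc[0::2] = np.sin(angles)
    enc[1::2] = np.cos(angles)
    return enc

enc0 = time_encoding(0)  # first pair: sin(0) = 0, cos(0) = 1
```

This is what makes the encoder vocabulary-free with respect to T: any new timestamp maps to a vector without retraining.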
b) Neural Memory Bank with Contrastive Temporal Retrieval
For LLM-augmented TKGQA (Plan-of-Knowledge), the TKS is operationalized as a dense bank of fact embeddings: each quadruple is verbalized into a templated, time-explicit natural-language statement, embedded, and stored for fast kNN search.
Contrastive fine-tuning (InfoNCE) on both semantic and temporal negatives ensures that question sub-objective embeddings retrieve strictly temporally valid facts (Qian et al., 6 Nov 2025). Negative examples include time-incorrect, relation-incorrect, and both-incorrect quadruples.
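A simplified, single-query sketch of such an InfoNCE objective, with the three negative types treated uniformly (the actual PoK loss, batching, and temperature may differ):

```python
import numpy as np

def info_nce(query, positive, negatives, temperature=0.07):
    """InfoNCE over one question sub-objective.

    query:     (d,)   sub-objective embedding
    positive:  (d,)   embedding of the temporally valid fact
    negatives: (N, d) time-incorrect / relation-incorrect / both-incorrect facts
    """
    norm = lambda x: x / np.linalg.norm(x, axis=-1, keepdims=True)
    q, pos, neg = norm(query), norm(positive), norm(negatives)
    logits = np.concatenate([[q @ pos], neg @ q]) / temperature
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                    # positive sits at index 0

rng = np.random.default_rng(0)
d = 16
loss = info_nce(rng.normal(size=d), rng.normal(size=d), rng.normal(size=(5, d)))
```

Minimizing this loss pushes the sub-objective embedding toward the temporally valid fact and away from corrupted quadruples, which is what makes the subsequent kNN retrieval temporally strict.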
c) Temporal Aggregation in Spiking Neural Networks
Here, each timestep of the SNN produces a distinct sub-model; the TKS is the ensemble view over these per-timestep outputs. Aggregating and distilling across them shares temporal knowledge among the sub-models—substantially improving per-step accuracy and enabling robust, short-latency inference (Dong et al., 2023).
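A toy version of this timestep-ensemble distillation, in which the timestep-averaged prediction acts as teacher and each per-step sub-model is pulled toward it with a KL term (a simplification of the actual TKS training objective in Dong et al., 2023):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temporal_knowledge_sharing_loss(step_logits):
    """step_logits: (T, C) — one classifier output per SNN timestep.

    The ensemble (timestep-averaged) prediction is the teacher; each
    per-step student is distilled toward it, so early steps inherit
    knowledge from the full-latency ensemble.
    """
    teacher = softmax(step_logits.mean(axis=0))      # ensemble view
    kl = 0.0
    for t in range(step_logits.shape[0]):
        student = softmax(step_logits[t])
        kl += np.sum(teacher * (np.log(teacher) - np.log(student)))
    return kl / step_logits.shape[0]
```

When every timestep already agrees with the ensemble the loss vanishes; otherwise it penalizes steps whose predictions drift from the shared temporal knowledge.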
3. Query and Inference Mechanisms
The operational workflow of a TKS centers on query-specific embedding computation and retrieval:
- Inductive Link Prediction: POSTRA does not store entity/relation embeddings indexed by ID. Instead, all representations are dynamically computed (via GNN + sinusoidal time) at inference over the incoming TKG (Pan et al., 4 Jun 2025).
- Dense Fact Retrieval: In retrieval-augmented LLM QA, all queries are decomposed (via plan induction from the LLM) into structured sub-objectives (Retrieve, Rank, Reason). Each "Retrieve" step queries TKS using the sub-objective embedding, returning candidates for re-ranking and multi-hop reasoning (Qian et al., 6 Nov 2025).
Query acceleration for large KGs is achieved by vector indexing: all vectors or embeddings can be indexed in FAISS/HNSW for sub-millisecond retrieval.
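For illustration, the retrieval primitive reduces to normalized inner-product kNN; the exact brute-force scan below is what FAISS's IVFPQ/HNSW indexes approximate for sub-millisecond retrieval at scale:

```python
import numpy as np

def knn(index_vecs, query, k=3):
    """Exact cosine-similarity kNN over a bank of stored embeddings.

    A FAISS or HNSW index replaces this O(N·d) scan in a real TKS;
    the returned ids/scores have the same meaning either way.
    """
    norm = lambda x: x / np.linalg.norm(x, axis=-1, keepdims=True)
    sims = norm(index_vecs) @ norm(query)   # (N,)
    top = np.argsort(-sims)[:k]
    return top, sims[top]

rng = np.random.default_rng(0)
bank = rng.normal(size=(1000, 64))      # stored fact embeddings
ids, scores = knn(bank, bank[42])       # query with a stored vector
```

Querying with a stored vector returns that vector as its own nearest neighbor with similarity 1, which is a useful sanity check for any index configuration.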
4. Scalability, Generalization, and Granularity
TKS frameworks are explicitly designed to be:
- Scalable: The parameter budget (embedding dimension, GNN layers) is decoupled from vocabulary size (|E|, |R|, or |T|), enabling web-scale KG ingestion.
- Granularity-Agnostic: Sinusoidal positional encodings allow switching temporal granularity (minutes, days, years) by modifying timestamp indices only, not by retraining parameters. A local temporal windowing parameter adapts to the ratio of fast to slow temporal changes (Pan et al., 4 Jun 2025).
- Fully Inductive / Zero-Shot: Representational strategies are vocabulary-free; thus, POSTRA and PoK can immediately process completely novel entities, relations, and timestamps without retraining or fine-tuning. This foundation model capability marks a departure from prior transductive or semi-inductive TKG methods.
5. Empirical Validation and Use Cases
Temporal Knowledge Stores have been validated in multiple families of tasks:
a) Temporal Knowledge Graph Link Prediction
POSTRA achieves strong zero-shot performance, transferring to new domains and time granularities with no retraining required (Pan et al., 4 Jun 2025). All representations are dynamically computed and the same scoring routine applies to any novel TKG.
b) Temporal KG Question Answering (TKGQA)
In the Plan-of-Knowledge paradigm, the TKS enables end-to-end retrieval-augmented reasoning workflows (Qian et al., 6 Nov 2025):
- Each temporal fact is indexed as a dense embedding.
- Query answering is decomposed into a chain of Retrieve, Rank, and Reason steps, with temporal consistency enforced by the contrastive retrieval loss.
- On four TKGQA datasets, PoK with TKS achieves up to 56.0% improvement over previous SOTA systems in retrieval-augmented accuracy.
Examples include multi-hop temporal queries—e.g., resolving the earliest investigator after a given event or intersecting political office terms with congressional sessions—where TKS enables precise, temporally ordered fact chains for LLM reasoning.
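The Retrieve → Rank → Reason chain can be sketched as a plan interpreter over the TKS. All names below, including the predicate-based `search` standing in for dense retrieval and the final callable standing in for the LLM, are hypothetical:

```python
class ToyTKS:
    def __init__(self, facts):
        self.facts = facts  # list of (subject, relation, object, timestamp)

    def search(self, predicate, k=10):
        """Stand-in for dense kNN retrieval over the fact bank."""
        return [f for f in self.facts if predicate(f)][:k]

def answer(question, plan, tks, reason):
    """Execute an LLM-induced plan of (Retrieve | Rank | Reason) sub-objectives."""
    evidence = []
    for kind, payload in plan:
        if kind == "Retrieve":
            evidence += tks.search(payload)
        elif kind == "Rank":
            evidence.sort(key=payload)          # e.g. temporal ordering
        elif kind == "Reason":
            return reason(question, evidence)   # LLM call in the real system

tks = ToyTKS([("A", "investigated", "X", 2003),
              ("B", "investigated", "X", 2001)])
plan = [("Retrieve", lambda f: f[1] == "investigated"),
        ("Rank",     lambda f: f[3]),
        ("Reason",   None)]
result = answer("Who investigated X first?", plan, tks,
                reason=lambda q, ev: ev[0][0])  # earliest investigator
```

Ranking by timestamp before reasoning is what yields the temporally ordered fact chain in the "earliest investigator" style of query described above.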
c) Spiking Neural Network Training and Inference
TKS-based training views the SNN as a temporal ensemble, performing self-distillation across timesteps. This process
- Results in high per-step accuracies, so inference with a reduced number of timesteps incurs only marginal performance loss.
- Maintains Top-1 accuracy on DVS-CIFAR10 near 75% under a reduced timestep budget (from 83.2% at the full budget), whereas the baseline SNN falls to ~50%.
- Significantly improves both accuracy and area under the risk-coverage curve (AURC), especially for temporally noisy or fine-grained image domains (Dong et al., 2023).
6. Architectural and Practical Considerations
A TKS built atop these principles comprises:
- Parameter Store: GNN and MLP weights (for POSTRA), prompt and embedding layers (for PoK).
- Fact Store: Explicit graph data with entity/relation/timestamp mappings and adjacency lists.
- Index Engine: Vector index (e.g., FAISS, HNSW) for fast embedding-based nearest neighbor retrieval.
- Temporal Decoupling: Sinusoidal encoding ensures compatibility with new timestamp formats or granularities.
- On-the-fly Representation: All inference is direct; no static lookup tables or retraining is required.
A plausible implication is that TKS architectures form the basis for granularity-agnostic, inductive, and extensible reasoning systems—making them adaptable to real-world settings where temporal structure is heterogeneous and new facts, entities, relations, or timestamps arise dynamically.
7. Comparative Table of TKS Characteristics
| Research Context | TKS Storage Paradigm | Query Paradigm |
|---|---|---|
| POSTRA (Foundation Model for TKGs) | Parameterized GNN + TKG | Dynamic message-passing + scoring |
| Plan of Knowledge (TKGQA + LLMs) | Dense fact embedding memory | Prompted retrieval → LLM Reasoning |
| Temporal SNNs (Knowledge Sharing) | Temporal sub-model ensemble | Distillation over steps/inference |
Each instantiation provides mechanisms for temporal alignment, transferability, and efficient handling of large, dynamic knowledge graphs or event sequences.
Temporal Knowledge Stores unify neural, symbolic, and retrieval-based approaches to temporal facts, forming the methodological core for current state-of-the-art temporal reasoning over structured knowledge. Their common properties—inductive representation, decoupled granularity, and sub-second retrieval—position them as essential infrastructure for temporal reasoning in both foundation models and task-specific systems (Dong et al., 2023, Pan et al., 4 Jun 2025, Qian et al., 6 Nov 2025).