
Temporal Knowledge Store Overview

Updated 11 November 2025
  • TKS is a repository that organizes time-annotated facts, enabling dynamic reasoning and inductive learning for temporal tasks.
  • It supports semantic and temporal alignment through approaches like graph neural encoders, sinusoidal time embeddings, and fast vector indexing.
  • The design enables efficient zero-shot inference and robust performance on tasks such as temporal KG link prediction and temporal question answering.

A Temporal Knowledge Store (TKS) is a systematized, neural, and/or symbolic repository designed to index, organize, retrieve, and facilitate reasoning over facts annotated with explicit temporal information. TKSs underpin a spectrum of architectures in machine learning, most prominently in temporal knowledge graph link prediction, temporal question answering, and temporal aggregation in spiking neural networks (SNNs). Unlike static knowledge bases, a TKS provides mechanisms for representing temporal dynamics, supporting efficient and accurate inference—even in zero-shot or inductive settings—across tasks requiring semantic and temporal alignment.

1. Core Definitions and Formal Structure

A TKS is fundamentally constructed over a temporal knowledge graph (TKG). A TKG is a set of quadruples

$$G = \{ (s, p, o, \tau)\ |\ s \in V,\ p \in R,\ o \in V,\ \tau \in T \}$$

where $V$ is the set of entities, $R$ is the set of relations, and $T$ enumerates observed timestamps. In practice, this formalizes time-resolved factual statements (e.g., “(Obama, presidentOf, USA, 2009)”).
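
As a concrete illustration, a minimal in-memory representation of such a quadruple set could look like the following Python sketch; the dataclass and field names are illustrative rather than taken from any of the cited systems:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quadruple:
    """One time-annotated fact (s, p, o, tau) from a temporal knowledge graph."""
    subject: str
    predicate: str
    obj: str
    timestamp: int  # here a year; an index into the ordered timestamp set T works equally well

# A toy TKG G as a set of quadruples; vocabularies are derived from it.
G = {
    Quadruple("Obama", "presidentOf", "USA", 2009),
    Quadruple("Obama", "visited", "Germany", 2013),
}
entities = {q.subject for q in G} | {q.obj for q in G}
relations = {q.predicate for q in G}
timestamps = sorted({q.timestamp for q in G})
```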

Within the storage architecture, the TKS augments the TKG with learned representations, a fast similarity search engine (e.g., FAISS with IVFPQ or HNSW), and (optionally) neural or symbolic indexing structures for downstream inference (Pan et al., 4 Jun 2025, Qian et al., 6 Nov 2025). Storage schemas are tailored to the downstream use case: spatio-temporal reasoning (POSTRA), temporal question answering (PoK), or temporal aggregation (SNNs). Granularity is decoupled from the design, supporting arbitrary intervals, snapshots, or pointwise timestamps.
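
As a hedged sketch of the similarity-search component, the snippet below builds a FAISS HNSW index over fact embeddings; the dimensionality, HNSW parameters, and random placeholder embeddings are assumptions for illustration, not the configurations used in the cited papers:

```python
import numpy as np
import faiss  # pip install faiss-cpu

d = 384                 # embedding dimension (illustrative)
num_facts = 100_000     # toy size; real stores may hold millions of facts

# Placeholder fact embeddings; in a real TKS these come from a trained encoder
# (a GNN over the TKG, or a language model applied to templated facts).
fact_embeddings = np.random.randn(num_facts, d).astype("float32")
faiss.normalize_L2(fact_embeddings)  # cosine similarity via inner product

# HNSW graph index for approximate nearest-neighbour search.
index = faiss.IndexHNSWFlat(d, 32, faiss.METRIC_INNER_PRODUCT)
index.add(fact_embeddings)
```

An IVFPQ index (faiss.IndexIVFPQ) can be substituted when memory, rather than latency, is the dominant constraint.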

2. Methodological Foundations: Representation and Retrieval

Different TKS implementations adopt distinct representational paradigms:

a) Graph Neural Encoders and Sinusoidal Time Embeddings

POSTRA employs two core message-passing modules:

  • Relation-Relation Graph Encoder: For each query relation $p$, a GNN propagates messages on a relation graph $G_r$ capturing 2-hop temporal patterns (h2h, h2t, t2h, t2t). The resulting embedding $\mathbf r^L_{p|p}$ encodes fine-grained local and structural dependencies.

  • Entity Encoder: Globally, for any query triple $(s, p, ?, \tau_i)$, a GNN assigns an initial embedding (relation-conditioned at the head entity) and propagates information from all adjacent quadruples. Each GNN layer fuses entity, relation, and sinusoidal time embeddings:

$$\mathbf e^{l+1}_{v|s} = \sum_{(w,q,\tau_j)\in\mathcal N(v)} \mathrm{TMSG}\big(\mathbf e^l_{w|s},\ \mathbf r_q,\ g^{l+1}(\mathrm{TE}(\tau_j))\big)$$

yielding embeddings compositional over both path structure and temporal sequence (Pan et al., 4 Jun 2025); a minimal layer sketch follows below.
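
The following is a minimal NumPy sketch of one such temporal message-passing layer; the TMSG fusion is simplified here to a sum of linear projections of the entity, relation, and time features, and all names are illustrative:

```python
import numpy as np

def temporal_mp_layer(e, r, te, neighbors, W_e, W_r, W_t):
    """One simplified temporal message-passing layer.

    e:  dict entity -> current embedding e^l_{w|s} (shape (d,))
    r:  dict relation -> relation embedding r_q
    te: dict timestamp index -> sinusoidal time encoding TE(tau)
    neighbors: dict entity v -> list of adjacent (w, q, tau) quadruple edges
    W_e, W_r, W_t: (d, d) projection matrices standing in for TMSG and g^{l+1}
    """
    d = next(iter(e.values())).shape[0]
    e_next = {v: np.zeros(d) for v in e}
    for v, edges in neighbors.items():
        for (w, q, tau) in edges:
            # Fuse neighbour entity, relation, and time information into one message.
            e_next[v] += W_e @ e[w] + W_r @ r[q] + W_t @ te[tau]
    # Nonlinearity applied after neighbourhood aggregation.
    return {v: np.tanh(h) for v, h in e_next.items()}
```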

The temporal encoding

$$\left[\mathrm{TE}(i)\right]_{2n} = \sin(\omega_n i), \qquad \left[\mathrm{TE}(i)\right]_{2n+1} = \cos(\omega_n i)$$

for timestamp index $i$ ensures granularity independence and facilitates transfer learning.
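
A minimal implementation of this encoding is sketched below; the frequency schedule $\omega_n = 10000^{-2n/d}$ follows the standard transformer convention and is an assumption rather than the exact schedule used in POSTRA:

```python
import numpy as np

def sinusoidal_te(i: int, d: int) -> np.ndarray:
    """Sinusoidal time encoding TE(i) for timestamp index i (d assumed even)."""
    n = np.arange(d // 2)
    omega = 1.0 / (10000.0 ** (2 * n / d))  # assumed transformer-style frequencies
    te = np.empty(d)
    te[0::2] = np.sin(omega * i)            # [TE(i)]_{2n}
    te[1::2] = np.cos(omega * i)            # [TE(i)]_{2n+1}
    return te

# Because TE depends only on the integer index i, switching granularity
# (minutes vs. days vs. years) only changes how raw timestamps map to indices.
te_lookup = {i: sinusoidal_te(i, d=64) for i in range(365)}  # e.g., daily indices
```

Encodings of this form can populate the te lookup used in the layer sketch above.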

b) Neural Memory Bank with Contrastive Temporal Retrieval

For LLM-augmented TKGQA (Plan-of-Knowledge), the TKS is operationalized as a dense bank of fact embeddings:

$$\text{TKS} = \{\mathbf E_f\ |\ (s,p,o,t)\in \mathcal{F}\}, \qquad \mathbf E_f = LM_t(\text{template}(s,p,o,t))$$

Embeddings are created from templated, time-explicit natural language and stored for fast kNN search.

Contrastive fine-tuning (InfoNCE) on both semantic and temporal negatives ensures that question sub-objective embeddings $\mathbf E_q$ retrieve strictly temporally valid facts (Qian et al., 6 Nov 2025). Negative examples include time-incorrect, relation-incorrect, and both-incorrect quadruples.
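
A hedged PyTorch-style sketch of such a contrastive objective is shown below; the temperature, tensor shapes, and negative construction are illustrative assumptions rather than the exact recipe of the Plan-of-Knowledge paper:

```python
import torch
import torch.nn.functional as F

def infonce_loss(query_emb, pos_emb, neg_embs, temperature=0.05):
    """InfoNCE over one positive fact and a set of negatives.

    query_emb: (d,)    sub-objective embedding E_q
    pos_emb:   (d,)    embedding of the temporally valid fact
    neg_embs:  (n, d)  time-incorrect / relation-incorrect / both-incorrect facts
    """
    q = F.normalize(query_emb, dim=-1)
    pos = F.normalize(pos_emb, dim=-1)
    neg = F.normalize(neg_embs, dim=-1)

    logits = torch.cat([
        (q * pos).sum(-1, keepdim=True),  # similarity to the positive (index 0)
        neg @ q,                          # similarities to all negatives
    ]) / temperature
    target = torch.zeros(1, dtype=torch.long)  # the positive sits at index 0
    return F.cross_entropy(logits.unsqueeze(0), target)
```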

c) Temporal Aggregation in Spiking Neural Networks

Here, each timestep produces a distinct sub-model; TKS refers to the ensemble view of the SNN:

$$Q[t]^{out} = f_t(x, m_{t-1}), \qquad V[t]^{out} = \mathrm{Softmax}(Q[t]^{out})$$

Aggregating and distilling across these outputs forms a procedure for sharing temporal knowledge—substantially improving per-step accuracy and enabling robust, short-latency inference (Dong et al., 2023).
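
The snippet below sketches, under simplifying assumptions, how per-timestep outputs can be aggregated into a teacher signal and distilled back into each step; the averaged teacher and the KL objective are illustrative stand-ins for the knowledge-sharing procedure, not a reproduction of the original training recipe:

```python
import torch
import torch.nn.functional as F

def temporal_knowledge_sharing_loss(per_step_logits, labels, alpha=0.5, tau=2.0):
    """per_step_logits: (T, B, C) logits Q[t]^out from each SNN timestep.

    The teacher is the average of per-step distributions; every step is trained
    on its task loss plus a KL term toward the teacher (simplified self-distillation).
    """
    T = per_step_logits.shape[0]
    teacher = F.softmax(per_step_logits.detach().mean(dim=0) / tau, dim=-1)  # (B, C)

    loss = per_step_logits.new_zeros(())
    for t in range(T):
        step_logits = per_step_logits[t]
        ce = F.cross_entropy(step_logits, labels)
        kd = F.kl_div(F.log_softmax(step_logits / tau, dim=-1), teacher,
                      reduction="batchmean") * tau * tau
        loss = loss + (1 - alpha) * ce + alpha * kd
    return loss / T
```

Because every timestep is pulled toward the ensemble consensus, inference with far fewer timesteps than training retains most of the accuracy, which is the effect described in Section 5.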

3. Query and Inference Mechanisms

The operational workflow of a TKS centers on query-specific embedding computation and retrieval:

  • Inductive Link Prediction: POSTRA does not store entity/relation embeddings indexed by ID. Instead, all representations are dynamically computed (via GNN + sinusoidal time) at inference over the incoming TKG (Pan et al., 4 Jun 2025).
  • Dense Fact Retrieval: In retrieval-augmented LLM QA, all queries are decomposed (via plan induction from the LLM) into structured sub-objectives (Retrieve, Rank, Reason). Each "Retrieve" step queries TKS using the sub-objective embedding, returning candidates for re-ranking and multi-hop reasoning (Qian et al., 6 Nov 2025).

Query acceleration for large KGs is achieved by vector indexing: all $\mathbf V_{s',p',\tau'}$ vectors or $\mathbf E_f$ embeddings can be indexed in FAISS/HNSW for sub-millisecond retrieval.
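
Continuing the illustrative FAISS setup from Section 1, a Retrieve step reduces to an approximate kNN query followed by a temporal validity filter; the post-retrieval time window below is a hypothetical way to enforce temporal constraints, not a documented PoK interface:

```python
import numpy as np

def retrieve(index, fact_table, query_emb, k=50, valid_time=None):
    """Approximate kNN retrieval over fact embeddings with an optional time filter.

    index:      a FAISS index built over normalized fact embeddings (see Section 1)
    fact_table: list of (s, p, o, t) quadruples aligned with the index rows
    valid_time: optional (t_min, t_max) window for dropping temporally invalid hits
    """
    q = query_emb.astype("float32").reshape(1, -1)
    q /= np.linalg.norm(q)
    scores, ids = index.search(q, k)
    hits = [(fact_table[i], float(s)) for i, s in zip(ids[0], scores[0]) if i != -1]
    if valid_time is not None:
        t_min, t_max = valid_time
        hits = [(f, s) for f, s in hits if t_min <= f[3] <= t_max]
    return hits  # candidates handed to the subsequent Rank / Reason steps
```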

4. Scalability, Generalization, and Granularity

TKS frameworks are explicitly designed to be

  • Scalable: The parameter budget (embedding dimension, GNN layers) is decoupled from vocabulary size ($|V|$, $|R|$, or $|T|$), enabling web-scale KG ingestion.
  • Granularity-Agnostic: Sinusoidal positional encodings allow switching temporal granularity (minutes, days, years) by modifying indices only, not by retraining parameters. Local temporal windowing (parameter $k$) adapts to the ratio of fast/slow temporal changes (Pan et al., 4 Jun 2025).
  • Fully Inductive / Zero-Shot: Representational strategies are vocabulary-free; thus, POSTRA and PoK can immediately process completely novel entities, relations, and timestamps without retraining or fine-tuning. This foundation model capability marks a departure from prior transductive or semi-inductive TKG methods.

5. Empirical Validation and Use Cases

Temporal Knowledge Stores have been validated in multiple families of tasks:

a) Inductive Temporal KG Link Prediction

POSTRA achieves strong zero-shot performance, transferring to new domains and time granularities with no retraining required (Pan et al., 4 Jun 2025). All representations are dynamically computed, and the same scoring routine applies to any novel TKG.

b) Temporal KG Question Answering (TKGQA)

In the Plan-of-Knowledge paradigm, the TKS enables end-to-end retrieval-augmented reasoning workflows (Qian et al., 6 Nov 2025):

  • Each temporal fact is indexed as a dense embedding.
  • Query answering is decomposed into a chain of Retrieve, Rank, and Reason steps, with temporal consistency enforced by the contrastive retrieval loss.
  • On four TKGQA datasets, PoK with TKS achieves up to 56.0% improvement over previous SOTA systems in retrieval-augmented accuracy.

Examples include multi-hop temporal queries—e.g., resolving the earliest investigator after a given event or intersecting political office terms with congressional sessions—where TKS enables precise, temporally ordered fact chains for LLM reasoning.

c) Spiking Neural Network Training and Inference

TKS-based training views the SNN as a temporal ensemble, performing self-distillation across timesteps. This process

  1. Results in per-step accuracies such that inference with $T_{test} \ll T_{train}$ achieves only marginal performance loss.
  2. Keeps Top-1 accuracy on DVS-CIFAR10 near ~75% at $T_{test}=1$ (versus 83.2% at $T_{train}=10$), whereas the baseline SNN falls to ~50%.
  3. Significantly improves both accuracy and area under the risk-coverage curve (AURC), especially for temporally noisy or fine-grained image domains (Dong et al., 2023).

6. Architectural and Practical Considerations

A TKS built atop these principles comprises the following components; a minimal structural sketch follows the list:

  • Parameter Store: GNN and MLP weights (for POSTRA), prompt and embedding layers (for PoK).
  • Fact Store: Explicit graph data with entity/relation/timestamp mappings and adjacency lists.
  • Index Engine: Vector index (e.g., FAISS, HNSW) for fast embedding-based nearest neighbor retrieval.
  • Temporal Decoupling: Sinusoidal encoding ensures compatibility with new timestamp formats or granularities.
  • On-the-fly Representation: All inference is direct; no static lookup tables or retraining is required.
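
To make this component breakdown concrete, the following minimal sketch wires the pieces together in one hypothetical class; the interfaces are illustrative and do not correspond to a published implementation:

```python
from typing import Callable, List, Sequence, Tuple
import numpy as np

Quad = Tuple[str, str, str, int]  # (subject, predicate, object, timestamp index)

class TemporalKnowledgeStore:
    """Toy TKS combining a fact store, an encoder, and a brute-force vector index."""

    def __init__(self, encoder: Callable[[Quad], np.ndarray]):
        self.encoder = encoder                   # parameter store lives in the encoder
        self.facts: List[Quad] = []              # explicit fact store
        self.embeddings: List[np.ndarray] = []   # index engine (exact search here)

    def add(self, quad: Quad) -> None:
        self.facts.append(quad)
        self.embeddings.append(self.encoder(quad))

    def retrieve(self, query_emb: np.ndarray, k: int = 10) -> Sequence[Quad]:
        """Nearest-neighbour lookup; a production system would use FAISS/HNSW."""
        E = np.stack(self.embeddings)
        sims = E @ query_emb / (np.linalg.norm(E, axis=1) * np.linalg.norm(query_emb))
        top = np.argsort(-sims)[:k]
        return [self.facts[i] for i in top]
```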

A plausible implication is that TKS architectures form the basis for granularity-agnostic, inductive, and extensible reasoning systems—making them adaptable to real-world settings where temporal structure is heterogeneous and new facts, entities, relations, or timestamps arise dynamically.

7. Comparative Table of TKS Characteristics

Research Context | TKS Storage Paradigm | Query Paradigm
--- | --- | ---
POSTRA (Foundation Model for TKGs) | Parameterized GNN + TKG | Dynamic message-passing + scoring
Plan of Knowledge (TKGQA + LLMs) | Dense fact embedding memory | Prompted retrieval → LLM reasoning
Temporal SNNs (Knowledge Sharing) | Temporal sub-model ensemble | Distillation across timesteps at inference

Each instantiation provides mechanisms for temporal alignment, transferability, and efficient handling of large, dynamic knowledge graphs or event sequences.


Temporal Knowledge Stores unify neural, symbolic, and retrieval-based approaches to temporal facts, forming the methodological core for current state-of-the-art temporal reasoning over structured knowledge. Their common properties—inductive representation, decoupled granularity, and sub-second retrieval—position them as essential infrastructure for temporal reasoning in both foundation models and task-specific systems (Dong et al., 2023, Pan et al., 4 Jun 2025, Qian et al., 6 Nov 2025).
