Memory in Time-Respecting Paths
- Memory in time-respecting paths is defined by temporal sequences whose future steps depend on previously visited nodes, introducing non-Markovian dynamics.
- The methodology employs formal models, maximum likelihood estimation, and synthetic generative frameworks to quantify memory effects in temporal graphs.
- Applications include improved temporal reachability computations, hardware implementations with memristors, and enhanced diffusion analyses in complex networks.
Memory in time-respecting paths refers to the phenomenon whereby the future evolution of a path on a temporal network—even one that is strictly time-respecting in the sense of chronology and edge-availability—depends not only on the current node and time but also on the sequence of nodes previously visited. This introduces non-Markovian structure into node sequences, strongly affecting processes such as diffusion, navigation, and information flow on temporal graphs.
1. Fundamentals of Time-Respecting Paths
A temporal network is defined as a sequence of node sets and time-stamped edges $\{(V, E_t)\}$, where each contact $(u, v, t)$ occurs at a discrete time $t$ and may carry a weight (Guerrini et al., 21 Nov 2025, Sahasrabuddhe et al., 14 Jan 2025). A time-respecting path (TRP) of length $\ell$ is then a sequence of node–time pairs $(v_0, t_0), (v_1, t_1), \dots, (v_\ell, t_\ell)$ satisfying the following:
- Chronological ordering: $t_i < t_{i+1}$ for all $i$;
- Adjacency: $(v_i, v_{i+1}) \in E_{t_{i+1}}$;
- Non-backtracking: $v_{i+1} \neq v_{i-1}$ (and, by convention, consecutive nodes are distinct, $v_{i+1} \neq v_i$).
Two size metrics are relevant: path length (the number of nodes, $\ell + 1$) and path duration ($t_\ell - t_0$).
Time-respecting paths generalize classical (static) paths by requiring that each step is both valid at the temporal level and avoids immediate cycles, embedding sequential constraints that make temporal graph processes more intricate than their static counterparts (Sahasrabuddhe et al., 14 Jan 2025).
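The constraints above are mechanical to verify; a minimal sketch (the `Contact` tuple layout is an assumption made here, not notation from the cited papers) checks chronology, adjacency, and non-backtracking along a contact sequence:

```python
from typing import List, Tuple

Contact = Tuple[int, int, int]  # (u, v, t): edge u--v used at time t

def is_time_respecting(path: List[Contact]) -> bool:
    """Check that a sequence of contacts forms a valid TRP:
    strict chronology, adjacency, and no immediate backtracking."""
    nodes = [path[0][0]]                 # nodes visited so far
    for i, (u, v, t) in enumerate(path):
        if u != nodes[-1]:               # adjacency: must depart the current node
            return False
        if i > 0 and t <= path[i - 1][2]:  # strict chronological ordering
            return False
        if len(nodes) >= 2 and v == nodes[-2]:  # non-backtracking
            return False
        nodes.append(v)
    return True
```

For example, a step back to the node just left, or a contact earlier in time than its predecessor, invalidates the path.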
2. Formal Models of Memory in Paths
Memory in time-respecting paths quantifies the extent to which the transition probabilities at each step depend on previously visited nodes, not just the current state. Two broad modeling regimes emerge:
- First-order (Markov-0): Transition probabilities depend only on the present node.
- Second-order (Markov-1) and higher: Next-node probabilities depend on the current node and one or more steps of path history.
A general, empirically validated model defines a "memory-only" (MEM) transition with parameter $q$ (the memory strength) and memory horizon $h$ (symbols as denoted here):

$$P(v_{i+1} = w) \;=\; q\,\frac{\mathbf{1}[w \in \mathcal{M}_i]}{|\mathcal{M}_i|} \;+\; (1-q)\,\frac{\mathbf{1}[w \in \mathcal{A}_i]}{|\mathcal{A}_i|},$$

where $\mathcal{M}_i$ is the set of the last $h$ distinct nodes visited (excluding $v_i$ and $v_{i-1}$) and $\mathcal{A}_i$ is the set of allowed next nodes. With probability $q$ the walk returns to a node in memory, otherwise to a random allowed node (Guerrini et al., 21 Nov 2025).
When community structure is present, a MEM+SBM model applies stochastic block model (SBM) affinities as weights on the non-memory term, replacing the uniform distribution with transition probabilities proportional to community affinities (Guerrini et al., 21 Nov 2025).
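A minimal simulation sketch of such a memory-driven walk, with the memory strength and horizon denoted `q` and `h` here (names chosen for illustration): with probability `q` the walker returns to one of the last `h` distinct nodes visited, otherwise it moves to a uniformly random allowed (non-backtracking) neighbor.

```python
from collections import deque
import random

def mem_walk(neighbors, start, steps, q=0.2, h=3, seed=0):
    """Memory-driven walk: with prob. q return to one of the last h
    distinct nodes visited; otherwise move to a random allowed
    neighbor (never the immediately preceding node)."""
    rng = random.Random(seed)
    memory = deque(maxlen=h)             # last h distinct nodes visited
    path, prev, cur = [start], None, start
    for _ in range(steps):
        allowed = [w for w in neighbors[cur] if w != prev]
        if not allowed:
            break
        recall = [w for w in memory if w in allowed and w != cur]
        nxt = rng.choice(recall) if recall and rng.random() < q else rng.choice(allowed)
        if cur not in memory:
            memory.append(cur)
        prev, cur = cur, nxt
        path.append(cur)
    return path
```

Adding SBM affinities as weights on the non-memory choice would turn this into the MEM+SBM variant.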
3. Empirical Quantification and Statistical Testing
To estimate the memory parameter and related structure parameters from real data, maximum likelihood estimation (MLE) is performed over observed TRP sequences. The likelihood accounts for the two possible transition sources (memory and random), and its maximization yields the most probable memory strength for the dataset (Guerrini et al., 21 Nov 2025).
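A toy sketch of this estimation by grid search, under the simplifying two-source mixture described above (the data layout and the exact likelihood in the cited paper may differ):

```python
import math

def mle_memory_strength(transitions, grid=201):
    """Grid-search MLE of the memory strength q. Each transition is
    (target, memory_set, allowed_set); the likelihood mixes a memory
    term (uniform over the memory set) with a random term (uniform
    over all allowed nodes)."""
    def loglik(q):
        ll = 0.0
        for target, mem, allowed in transitions:
            p = (1 - q) / len(allowed) if target in allowed else 0.0
            if target in mem:
                p += q / len(mem)
            if p <= 0:
                return float("-inf")
            ll += math.log(p)
        return ll
    qs = [i / (grid - 1) for i in range(grid)]
    return max(qs, key=loglik)
```

On synthetic transitions where half the targets fall in a singleton memory set (out of ten allowed nodes), the estimate lands near the analytic optimum $q = 4/9$.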
Validation requires distinguishing genuine memory from artifacts produced by degree heterogeneity or community structure. Two classes of time-shuffled null models are employed:
- Erdős–Rényi (ER) null: Randomizes each graph snapshot independently, preserving instantaneous density but destroying memory and structure.
- SBM null: Each snapshot is an independent SBM draw, matching community affinities and densities but again erasing memory effects.
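The ER-style null can be sketched as follows, assuming undirected simple snapshots (the helper name is hypothetical): each snapshot is redrawn with the same number of edges, so instantaneous density is preserved while temporal correlations are destroyed.

```python
import random

def er_null_snapshot(snapshot_edges, n, rng):
    """ER null: redraw a snapshot with the same edge count on n nodes,
    preserving instantaneous density but erasing memory and structure."""
    m = len(snapshot_edges)
    possible = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return rng.sample(possible, m)
```

Applying this independently to every snapshot of a temporal graph, then re-running the MLE, yields the null distribution of the memory strength against which the empirical estimate is compared.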
Empirical studies across multiple proximity datasets (including high-resolution SocioPatterns data) find that extracted memory strengths are consistently and significantly larger for real data than for these nulls, indicating statistically robust memory at levels up to $q \approx 0.3$, with null distributions sharply peaked near zero (Guerrini et al., 21 Nov 2025).
4. Generative and Analytical Models for Memory
A class of generative synthetic temporal graphs with tunable memory weight $q$ and memory horizon $h$ enables controlled study of path-memory effects. At each time $t$, a lazy transition matrix $T_t$ (allowing self-loops) is computed, and the $h$-step reachability matrix $R_t^{(h)}$ encodes multi-step connectivity.
Edge probabilities are a convex combination of uniform randomness and memory-driven reachability,

$$P_t(i, j) \;\propto\; (1 - q) \;+\; q\,[R^{(h)}_{t-1}]_{ij},$$

where the proportionality constant is a normalization (Guerrini et al., 21 Nov 2025). Empirically, the inferred memory strength increases monotonically with the planted $q$, enabling direct control over memory effects.
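The convex-combination step can be sketched directly (a minimal illustration, not the paper's implementation; row-wise normalization and the no-self-loop convention are assumptions):

```python
import numpy as np

def edge_probabilities(R_prev, q):
    """Mix uniform randomness with memory-driven reachability:
    score(i, j) = (1 - q) + q * R_prev[i, j], normalized per row.
    R_prev is the h-step reachability matrix of the previous snapshot."""
    scores = (1 - q) * np.ones_like(R_prev, dtype=float) + q * R_prev
    np.fill_diagonal(scores, 0.0)        # no self-loop contacts
    return scores / scores.sum(axis=1, keepdims=True)
```

With $q = 0$ every allowed edge is equally likely; raising $q$ tilts new contacts toward pairs that were recently mutually reachable.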
Analytical diffusion analysis, using linear processes such as

$$\mathbf{p}_{t+1} = \mathbf{p}_t\, T_t,$$

demonstrates that increased memory in TRPs leads to systematically slower network mixing, as measured by reduced entropy of the state distribution after $t$ steps. Walkers' recurrent returns to recently visited nodes reduce global exploration efficiency (Guerrini et al., 21 Nov 2025).
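The entropy-based mixing measure is straightforward to compute; a minimal sketch, assuming row-stochastic snapshot matrices $T_t$:

```python
import numpy as np

def mixing_entropy(transition_mats, p0):
    """Propagate p_{t+1} = p_t @ T_t through a sequence of snapshot
    transition matrices and return the Shannon entropy of the final
    distribution; lower entropy indicates slower mixing."""
    p = np.asarray(p0, dtype=float)
    for T in transition_mats:
        p = p @ T
    p = p[p > 0]                      # 0 * log 0 := 0
    return float(-(p * np.log(p)).sum())
```

A walker frozen in place (identity transitions) yields entropy 0, while a fully mixing snapshot drives the distribution to maximal entropy $\log N$ in one step.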
5. Higher-Order and Concise Memory Network Representations
Second-order Markov representations explicitly encode memory by constructing state nodes for all observed edges $(i, j)$, with transitions between these state nodes, $(i, j) \to (j, k)$, weighted by observed triplet frequencies (Sahasrabuddhe et al., 14 Jan 2025).
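Building this representation from path data amounts to counting triplets; a minimal sketch (function name chosen here):

```python
from collections import Counter, defaultdict

def second_order_network(paths):
    """Second-order (Markov-1) representation: state nodes are observed
    edges (i, j); transitions (i, j) -> (j, k) are weighted by observed
    triplet frequencies, normalized to probabilities per state node."""
    counts = Counter()
    for p in paths:
        for i, j, k in zip(p, p[1:], p[2:]):
            counts[((i, j), (j, k))] += 1
    totals = defaultdict(int)
    for (src, _), c in counts.items():
        totals[src] += c
    return {edge: c / totals[edge[0]] for edge, c in counts.items()}
```

From paths `a→b→c` (twice) and `a→b→d` (once), the state node `(a, b)` transitions to `(b, c)` with probability 2/3 and to `(b, d)` with probability 1/3.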
A concise network model employs Bayesian regularization for robust estimation, decomposes transition probabilities via convex non-negative matrix factorization (NMF), and selects the number of memory "modes" $r$ by the flow overlap between $\hat{P}$, the posterior means of the transition probabilities, and $\tilde{P}^{(r)}$, their rank-$r$ NMF approximation (Sahasrabuddhe et al., 14 Jan 2025).
This methodology reveals that a small number of latent state nodes per physical node suffices to accurately capture dominant memory effects in large-scale systems, as demonstrated in both synthetic and empirical cases (e.g., air-traffic, social information flow) (Sahasrabuddhe et al., 14 Jan 2025).
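A toy sketch of rank selection by overlap: the NMF below uses plain multiplicative updates, and the overlap is a row-wise min-overlap between the transition matrix and its renormalized rank-$r$ approximation. Both are illustrative stand-ins; the cited paper's convex NMF and exact flow-overlap definition may differ.

```python
import numpy as np

def nmf(P, r, iters=1000, seed=0):
    """Rank-r non-negative factorization P ~= W @ H (multiplicative updates)."""
    rng = np.random.default_rng(seed)
    n, m = P.shape
    W, H = rng.random((n, r)) + 0.1, rng.random((r, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ P) / (W.T @ W @ H + 1e-12)
        W *= (P @ H.T) / (W @ H @ H.T + 1e-12)
    return W @ H

def flow_overlap(P, P_r, weights=None):
    """Row-wise overlap sum_s w_s * sum_j min(P[s,j], Q[s,j]), where Q is
    the row-renormalized approximation; equals 1 for a perfect match."""
    Q = P_r / P_r.sum(axis=1, keepdims=True)
    w = np.full(len(P), 1 / len(P)) if weights is None else weights
    return float((w * np.minimum(P, Q).sum(axis=1)).sum())
```

On a transition matrix that is exactly rank 2 (two repeated row profiles), the rank-2 approximation recovers nearly all flow, while rank 1 loses a measurable share.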
6. Algorithmic and Data Structure Aspects
Efficient computation of temporal reachability, which underpins path-based memory analysis, leverages compact data structures for the timed transitive closure (TTC), representing all minimal journeys between each node pair:
- Traditional approach: maintains one balanced BST per node pair, storing the minimal non-nested journey intervals. Insertions and queries are fast per operation (interval), but pointer-based trees carry a substantial space overhead.
- Compact approach: represents each BST with two dynamic bit-vectors, kept synchronized so that per-operation time remains comparable while total space drops substantially. For temporally dense graphs, empirical measurements show markedly lower memory use and faster updates for the compact structure, though the BST may retain advantages in extremely sparse regimes (Brito et al., 2023).
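The invariant both structures maintain (a minimal set of non-nested journey intervals per node pair) can be sketched with a plain sorted list standing in for the BST or bit-vectors (helper names chosen here):

```python
def insert_interval(intervals, dep, arr):
    """Keep only minimal non-nested journey intervals (departure, arrival)
    for one node pair: a journey is dominated if another departs no
    earlier and arrives no later. Returns the updated sorted list."""
    if any(d >= dep and a <= arr for d, a in intervals):
        return intervals                 # dominated by an existing journey
    kept = [(d, a) for d, a in intervals
            if not (d <= dep and a >= arr)]  # drop journeys the new one dominates
    kept.append((dep, arr))
    kept.sort()
    return kept

def reaches(intervals, t_start, t_end):
    """Reachability query: is there a journey departing at or after
    t_start and arriving at or before t_end?"""
    return any(t_start <= d and a <= t_end for d, a in intervals)
```

The compact structure of Brito et al. encodes exactly this interval set in bit-vectors instead of tree nodes, which is where the space savings come from.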
A plausible implication is that such data structures are critical for large-scale empirical studies of memory effects in real-world temporal networks, enabling scalable extraction and enumeration of TRPs in massive contact sequences.
7. Physical Architectures and Tropical Algebra Perspectives
Physical implementation of temporal memory—in particular, for time-respecting path-stitching, as required in hardware race-logic—leverages analog memristor-based storage cells. These devices map arrival time to resistance in a controlled analog manner, facilitating high-speed, energy-efficient storage and replay of timing wavefronts (Madhavan et al., 2020).
Key mapping of race logic primitives to tropical algebra semantics (min, + semiring):
- Delay elements correspond to tropical multiplication (addition of delays to arrival times);
- First-arrival (OR) gates correspond to tropical addition (min of arrival times);
- "Inhibit" gates apply selective masking (a min combined with an additional blocking transformation).
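The first two primitives suffice to express shortest-path relaxation in the (min, +) semiring; a minimal sketch of how a race-logic wavefront step maps onto tropical matrix–vector multiplication (function names chosen here):

```python
INF = float("inf")

def delay(t, d):
    """Delay element: tropical multiplication (add delay d to arrival time t)."""
    return t + d

def first_arrival(*times):
    """First-arrival (OR) gate: tropical addition (min of arrival times)."""
    return min(times)

def minplus_step(arrivals, delays):
    """One wavefront step: relax every edge in the (min, +) semiring --
    the operation a race-logic network computes physically."""
    n = len(arrivals)
    return [first_arrival(*(delay(arrivals[i], delays[i][j]) for i in range(n)),
                          arrivals[j])
            for j in range(n)]
```

Iterating `minplus_step` from a source wavefront computes single-source shortest arrival times, which is the computation the temporal state machine sequences in hardware.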
A temporal state machine composed of three memory banks orchestrates imperative computations via programmable state transitions, breaking the invariance of pure race logic and allowing for general-purpose path-computing architectures. Implementations in 180 nm CMOS and experimental memristor arrays demonstrate competitive per-iteration Dijkstra costs on a 32-node graph, with core energy efficiency measured in billions of edges traversed per joule. Scaling projections to 14 nm and improved memristors suggest order-of-magnitude gains (Madhavan et al., 2020).
The direct consequence is that temporal memory at the physical layer supports both the algorithmic and structural phenomena observed in memory-rich time-respecting path dynamics, leading to hardware acceleration opportunities for path-centric computations.
References
- "Modeling memory in time-respecting paths on temporal networks" (Guerrini et al., 21 Nov 2025)
- "Concise network models of memory dynamics reveal explainable patterns in path data" (Sahasrabuddhe et al., 14 Jan 2025)
- "Dynamic Compact Data Structure for Temporal Reachability with Unsorted Contact Insertions" (Brito et al., 2023)
- "Temporal State Machines: Using temporal memory to stitch time-based graph computations" (Madhavan et al., 2020)