Memory-Driven Random Walks
- Memory-driven random walks are stochastic processes where a walker's future moves are influenced by an explicit history, resulting in non-Markovian behavior.
- They exhibit a range of diffusion behaviors, including superdiffusion, subdiffusion, trapping, and power-law relaxation stemming from various memory kernels.
- These processes underpin applications in network science, ecology, and biological systems, with models validated by both analytical and simulation studies.
Memory-driven random walks are a broad class of stochastic processes in which the walker's next move is influenced by its past trajectory. These processes are fundamentally non-Markovian, as they depend on an explicit or implicit history-dependent rule, typically via local, global, or resource-mediated memory effects. The resulting dynamics can include anomalous diffusion, long-time correlations, inhomogeneous or self-organized spatial patterns, nonergodicity, phase transitions, and critical slowing down. This paradigm has been formalized and studied across multiple mathematical frameworks, ranging from single-agent lattice walks to interacting walkers on complex, time-varying networks.
1. Mathematical Frameworks and Memory Mechanisms
Memory in random walks can be implemented by several distinct mathematical mechanisms, each giving rise to qualitatively different transport behaviors:
- Preferential Relocation and Reinforcement: The walker may intermittently "reset" or "relocate" to one of its previously visited sites, with a selection probability proportional to the time spent at each site, the order of visits, or an explicit reinforcement weight (e.g., Pólya urn-type or activity-driven memory) (Mailler et al., 2018, Guerrero-Estrada et al., 15 Oct 2024, Masó-Puigdellosas et al., 2019); a minimal sketch of this mechanism follows this list.
- Weighted Memory Kernel: Each past location is retained in the agent's "memory" according to a temporal kernel μ(t) (e.g., exponential, power-law, or heavy-tailed), determining recall probabilities and thus shaping the spatial structure of memorized paths or the recurrence statistics of returns (Gnacik et al., 2018, Mailler et al., 2018).
- Step Correlation and Local Feedback: The probability of the next move can depend directly on counts of previous left/right steps, the last direction or location, or the time elapsed since previous visits, commonly generating negative step correlations, directional anti-persistence, or edge-dependent feedback (Kazimierski et al., 2014, Choi et al., 2012, Budini, 2016, Turban, 2010).
- Multiple Memory Channels: At each step, a random subset of the previous n steps (sampled with or without replacement) is consulted, possibly with independent noise or reinforcement per channel. This generates interactions between channels, enhancing the complexity and richness of the phase diagram (Saha, 18 Jun 2025, Maulik et al., 12 Sep 2025).
- Memory Lapses and Partial Memory: A memory lapse parameter or random memory window introduces varying probabilities of "forgetting" or of sampling only certain time intervals of the past, resulting in walks that interpolate between Markovian and fully memory-driven behavior (Dhillon et al., 22 Jan 2025, González-Navarrete et al., 2021).
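As a concrete illustration of the preferential-relocation mechanism above, here is a minimal sketch assuming a one-dimensional nearest-neighbour walk, a relocation probability q, and an exponential memory kernel; the function name, parameter values, and kernel choice are illustrative assumptions rather than the definitions used in the cited papers.

```python
import random
import math

def relocation_walk(n_steps, q=0.1, decay=0.01, seed=0):
    """Nearest-neighbour walk on Z that, with probability q, relocates to a
    previously visited site chosen with weight exp(-decay * age).
    A toy illustration of kernel-weighted preferential relocation."""
    rng = random.Random(seed)
    history = [0]          # visited positions, in order of visit
    x = 0
    for t in range(1, n_steps + 1):
        if rng.random() < q and len(history) > 1:
            # recall: weight each past visit by an exponential memory kernel
            weights = [math.exp(-decay * (t - s)) for s in range(len(history))]
            x = rng.choices(history, weights=weights, k=1)[0]
        else:
            # ordinary diffusive step
            x += rng.choice((-1, 1))
        history.append(x)
    return history

if __name__ == "__main__":
    path = relocation_walk(10_000)
    print("final position:", path[-1])
```

Swapping the exponential kernel for a heavier-tailed or visit-count-based weight reproduces the other memory mechanisms listed above within the same loop structure.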
2. Transport Properties, Scaling, and Phase Transitions
The interplay between memory and random walk dynamics induces a variety of anomalous behaviors:
- Classical ERW/Monkey Walk/Crowd Models: Memory causes the mean-square displacement (MSD) to deviate from linear-in-time scaling, producing superdiffusion (MSD ∼ t^γ, γ > 1), subdiffusion (γ < 1), trapping (MSD saturates), or ultraslow (logarithmic) transport (Choi et al., 2012, Mailler et al., 2018, Hasnain et al., 2017, Budini, 2016); a Monte Carlo sketch of the ERW regimes follows this list.
- Phase Transitions and Critical Parameters: As the memory strength, the number of memory channels n, or the reinforcement parameter is varied (e.g., in the multi-channel ERW), transitions occur between diffusive, superdiffusive, ballistic, and localized regimes. In the two-memory-channel ERW, four regimes arise (diffusive, superdiffusive, ballistic, and mildly superdiffusive), separated by sharp critical points determined by the memory parameter p (Maulik et al., 12 Sep 2025, Saha, 18 Jun 2025).
- Relaxation and Self-Organized Criticality: On networks, preferential resets and reinforcement convert exponential relaxation into power-law decays, with exponents determined by the network's spectral gap and the resetting (memory) strength. This anomalously slow relaxation underlies self-organized criticality (SOC) in networked systems, with robust avalanche statistics (Guerrero-Estrada et al., 15 Oct 2024, Jafari, 25 May 2025).
- Ergodicity Breaking and Inhomogeneous Diffusion: Time-averaged moments remain random variables even as t → ∞, signifying "weak ergodicity breaking." Individual trajectories appear as Markovian walks with distinct, trajectory-dependent transition rates drawn from a beta or related distribution, and ensemble averages mask this heterogeneity (Budini, 2016).
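To illustrate the regime structure, the following is a short Monte Carlo sketch of the classical single-memory ERW rule (recall a uniformly random earlier step, repeat it with probability p, reverse it otherwise); the step counts, ensemble size, and p values are illustrative choices, not taken from the cited studies.

```python
import random

def elephant_walk(n_steps, p, seed=0):
    """Classical elephant random walk: at each step recall a uniformly random
    earlier step and repeat it with probability p (reverse it otherwise)."""
    rng = random.Random(seed)
    steps = [rng.choice((-1, 1))]              # first step is unbiased
    for _ in range(n_steps - 1):
        recalled = rng.choice(steps)           # uniform recall of the past
        steps.append(recalled if rng.random() < p else -recalled)
    x, traj = 0, []
    for s in steps:
        x += s
        traj.append(x)
    return traj

def msd(p, n_steps=2000, n_walkers=500):
    """Ensemble mean-square displacement at the final time."""
    return sum(elephant_walk(n_steps, p, seed=k)[-1] ** 2
               for k in range(n_walkers)) / n_walkers

if __name__ == "__main__":
    for p in (0.5, 0.6, 0.9):                  # weak vs. strong memory
        print(f"p = {p}: MSD(T) ~ {msd(p):.1f}")
```

For the classical single-channel rule the MSD grows noticeably faster once p exceeds 3/4; multi-channel variants shift these boundaries as described below.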
3. Analytical Results and Key Formulas
Exact and asymptotic results are central to understanding the impact of memory on macroscopic observables:
- Memory-Kernel Scaling and Diffusion: For preferential relocations with a uniform memory kernel, the position after n steps scales as X_n / √(log n) → N(0, σ²) ("slow diffusion"); with more strongly decaying memory, the scaling slows further (e.g., to log log n). If the memory kernel is heavy-tailed (μ(x) ∼ x^α), the scaling can cross over to algebraic or even stretched-exponential forms (Mailler et al., 2018, Gnacik et al., 2018).
- Moment Recursion and Explicit Expressions: Models with random or finite memory windows yield recursive or explicit combinatorial formulas for the mean increment and displacement, involving nested sums or products over the history (Dhillon et al., 22 Jan 2025, Turban, 2010).
- Multiple-Channel Phase Boundaries: For the two-channel ERW, precise phase boundaries are found at p = 11/16 (diffusive-superdiffusive), 7/8 (superdiffusive-ballistic), and an additional higher-order boundary at p ≈ 0.94. The limiting scaled position and fluctuations follow nontrivial limit laws, sometimes non-Gaussian (Maulik et al., 12 Sep 2025).
- Power-Law Relaxation in Networks: For a random walk with resetting parameter q on a network with adjacency-matrix eigenvalues λ_ℓ, the slowest nonstationary mode decays as t^(-b_2(q)), with b_2(q) = [(1-q)(1-λ_2)] / [1-(1-q)λ_2]. For any q > 0, relaxation is power-law and lacks a characteristic timescale (Guerrero-Estrada et al., 15 Oct 2024); this exponent is evaluated numerically in the sketch below.
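As a quick numerical illustration of the relaxation exponent, the sketch below evaluates b_2(q) for a few resetting strengths; the spectral value λ_2 = 0.9 is an assumed placeholder, not a property of any specific network from the cited work.

```python
def relaxation_exponent(q, lam2):
    """Power-law relaxation exponent b_2(q) = (1-q)(1-lam2) / (1 - (1-q)*lam2)
    for the slowest nonstationary mode of a walk with memory/resetting strength q."""
    return (1.0 - q) * (1.0 - lam2) / (1.0 - (1.0 - q) * lam2)

if __name__ == "__main__":
    lam2 = 0.9                     # illustrative spectral value (assumption)
    for q in (0.01, 0.1, 0.5):
        print(f"q = {q}: b_2 = {relaxation_exponent(q, lam2):.3f}")
```

As q → 0 the exponent vanishes and exponential mixing is recovered only in the strict memoryless limit, consistent with the statement that any q > 0 yields power-law relaxation.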
4. Application to Networks, Ecology, and Population Dynamics
Memory-driven random walks have been fruitfully applied to various domains:
- Network Centrality and Ranking: Memory alters the steady-state distribution and timescales of PageRank-like processes, random search, and discovery in dynamically evolving social and information networks. Nodes' ability to "collect walkers" is diminished by stronger memory due to repeated interactions, as shown analytically in time-varying activity-driven models (Wang et al., 2020).
- Resource Foraging and Home Range Formation: Memory rules that reinforce cycles and revisitation patterns can lead to emergent "home ranges," optimal foraging cycles, and phase transitions between unbounded and bounded movement (e.g., in frugivore-plant models with resource recovery times) (Kazimierski et al., 2014); a toy sketch of this feedback follows this list.
- Biological and Evolutionary Models: Mapping memory-driven random walks to multi-color, multi-draw Pólya urns allows direct modeling of interacting finite populations or species with memory-mediated, history-dependent reproduction or mutation. In the RWMC framework, the composition and diversity of the urn population directly correspond to the walker's state (Saha, 18 Jun 2025).
- Complex Transport and Epidemics: Long-range memory and non-Markovian resets slow down spreading on networks, implying that contagion, information, or stress propagation (e.g., blackout avalanches, neural activity) exhibit SOC-like avalanche statistics driven by revisit-reinforced walker dynamics (Jafari, 25 May 2025, Guerrero-Estrada et al., 15 Oct 2024).
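The resource-recovery feedback can be caricatured with a toy sketch on a ring of sites, where the walker remembers its own last-visit times and prefers neighbours it believes have replenished; the lattice geometry, recovery time, and decision rule here are illustrative assumptions rather than the frugivore-plant model of the cited work.

```python
import random

def forager_walk(n_steps, n_sites=50, recovery=20, seed=0):
    """Walker on a ring of resource sites. Each site needs `recovery` steps to
    replenish after being harvested; the walker remembers its own last-visit
    times and prefers the neighbour it believes has recovered."""
    rng = random.Random(seed)
    last_harvest = [-recovery] * n_sites   # all sites initially replenished
    x, visits = 0, []
    for t in range(n_steps):
        neighbours = [(x - 1) % n_sites, (x + 1) % n_sites]
        # memory-based preference: move to a neighbour thought to be recovered
        ready = [s for s in neighbours if t - last_harvest[s] >= recovery]
        x = rng.choice(ready) if ready else rng.choice(neighbours)
        last_harvest[x] = t                # harvest and restart that site's clock
        visits.append(x)
    return visits

if __name__ == "__main__":
    path = forager_walk(500)
    print("distinct sites visited:", len(set(path)))
```

Tuning the recovery time relative to the exploration rate shifts the walker between wandering and settling into repeated cycles over a bounded set of sites, the qualitative exploration-exploitation tradeoff noted above.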
5. Statistical Physics and Mathematical Connections
The theoretical analysis of memory-driven random walks is closely connected to several powerful constructs:
- Pólya Urn and Branching Processes: Many memory models are exactly mapped to urn schemes, allowing phase diagrams to be obtained by analyzing replacement matrices and branching trees (e.g., in the multi-channel ERW and reinforced walks with stable jumps) (Baur, 2019, Saha, 18 Jun 2025); a minimal urn simulation follows this list.
- Random Recursive Trees and Memory Trees: Preferential relocation processes correspond to random recursive tree growth, with the spatial scaling of the walker directly related to the height profile of the tree. The limiting position is determined by the random branch followed in the weighted tree (Mailler et al., 2018).
- Martingale Theory: Limit theorems for reinforced walks under memory lapses, or in local-memory models, use martingale central limit theorems, with the scaling and normalization determined by the specific memory parameters (González-Navarrete et al., 2021).
- Nonergodicity and Weak Ergodicity Breaking: Ballistic or sub-ballistic ensemble spreading can coexist with strictly random, realization-dependent time-averaged observables, governed by trajectory-specific transition rates drawn from nontrivial limiting distributions. This gives a kinetic realization of weak ergodicity breaking in superdiffusive systems (Budini, 2016).
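For intuition on the urn mapping, the following minimal two-colour Pólya urn simulation shows how reinforcement produces a random (Beta-distributed) limiting composition, echoing the trajectory-dependent rates discussed above; the colours, replacement rule, and parameters are illustrative, and the multi-colour, multi-draw schemes used for the multi-channel ERW generalise this.

```python
import random

def polya_urn(n_draws, reinforcement=1, seed=0):
    """Two-colour Polya urn: draw a ball uniformly, return it together with
    `reinforcement` extra balls of the same colour. The fraction of one colour
    converges to a random (Beta-distributed) limit, mirroring the
    realization-dependent transition rates seen in memory-driven walks."""
    rng = random.Random(seed)
    counts = [1, 1]                        # one ball of each colour to start
    for _ in range(n_draws):
        colour = 0 if rng.random() < counts[0] / sum(counts) else 1
        counts[colour] += reinforcement
    return counts[0] / sum(counts)

if __name__ == "__main__":
    limits = [polya_urn(10_000, seed=k) for k in range(5)]
    print("limiting colour-0 fraction per realisation:",
          [round(f, 3) for f in limits])
```

Each realisation settles on a different limiting fraction, which is the urn-level counterpart of weak ergodicity breaking: ensemble averages hide the randomness locked in by early reinforcement.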
6. Model Comparison and Regime Summary
| Model/Class | Memory Mechanism / Parameter | Regimes Observed |
|---|---|---|
| Preferential relocation ("monkey walk") | Kernel-weighted random resets | Logarithmic or slower diffusion; universal CLT (Mailler et al., 2018) |
| ERW with random/partial/multiple memory | Random or multiple recall of previous steps | Diffusive, superdiffusive, ballistic, trapping; phase transitions (Saha, 18 Jun 2025, Maulik et al., 12 Sep 2025, Dhillon et al., 22 Jan 2025) |
| Reinforced random walk with memory lapses | Probabilistic mixture of memory-based and Markovian steps | Diffusive, superdiffusive, scaling laws via martingale methods (González-Navarrete et al., 2021) |
| Networks: activity-driven, resetting, reinforcement | Preferential revisit, link memory | Non-Markovian degree/occupation distributions, delayed mixing, SOC (Guerrero-Estrada et al., 15 Oct 2024, Wang et al., 2020, Jafari, 25 May 2025) |
| Random walks with local memory (RWLM/rotors) | Vertex "rotor" with retrospective update | Quenched invariance principle (Brownian scaling under stationarity) (Chan et al., 2018) |
| Mobility with memory (site-reinforced) | Jump probability via visit counts, impulse | Subdiffusion, trapping, logarithmic escape (Choi et al., 2012) |
| Environmental resource memory (foragers/plants) | Last-visit time + recovery, conservative/explorative | Emergent cycles/home ranges, exploration-exploitation tradeoff (Kazimierski et al., 2014) |
7. Notable Analytical and Simulation Results
- Analytical results: Closed-form expressions for moments, limiting position laws, and scaling exponents are available for a subset of models, especially when mapping to urn processes or employing martingale/coupling arguments.
- Simulation results: Monte Carlo studies are indispensable for models with complex kernels or high-dimensional memory, confirming the nonergodic behavior, phase boundaries, and scaling theories.
- Empirical Validation: Real-world data (e.g., Digg-Reply social networks, animal relocation paths) exhibit memory effects predicted by theory: in particular, diminished node centrality for highly active nodes, more uniform stationary distributions, intrinsic delays, and slow mixing (Wang et al., 2020, Guerrero-Estrada et al., 15 Oct 2024).
Memory-driven random walks unify a range of history-dependent transport phenomena across statistical physics, probability, network science, and ecology. The rigorous results demonstrate how memory, reinforcement, and channel multiplicity induce anomalous scaling, criticality, nonergodicity, and phase transitions, with models validated by empirical data and exact mappings to urns, trees, and random environments. These findings are essential for understanding transport, search, information flow, and emergent criticality in complex, non-Markovian systems.