Stochastic Conditioning in Regenerative Processes

Updated 9 October 2025
  • Stochastic conditioning is a framework that generalizes regeneration by defining breakpoints using both past-dependent and future-dependent events.
  • The methodology identifies regeneration times through intersecting events, ensuring i.i.d. or one-dependent cycles via stationarity and monotonicity conditions.
  • Applications span particle systems, infinite-bin models, and Harris ergodic chains, enabling robust limit theorems and convergence results in stochastic processes.

Stochastic conditioning refers to techniques and frameworks in which the conditional structure of random processes, or their regenerative properties, depends on both future and past events. This viewpoint enables the systematic identification of regeneration times (random break points) in stochastic sequences under broad conditions, extending beyond the classical domain of stopping times, which are adapted only to the past. The notion is especially powerful in ergodic theory, stochastic particle systems, and models with memory or path dependence.

1. Fundamental Concepts of Stochastic Conditioning

Stochastic conditioning generalizes classical regeneration by allowing regeneration or break-points to be defined by events that depend on both the entire past and the (possibly infinite) future trajectory of a process. Formally, for a discrete-time stochastic process, two families of events are defined for every $n \in \mathbb{Z}$:

  • Past-dependent events: $H_n$, measurable with respect to information up to and including time $n$.
  • Future-dependent events: $F_n$, measurable with respect to information from time $n$ onwards.

The regeneration events are then given by $A_n = H_n \cap F_n$. The process "regenerates" at times $T_0, T_1, \ldots$ defined recursively by

$$T_0 = \min\{n \geq 0 \mid A_n \text{ occurs}\}, \qquad T_{k+1} = \min\{n > T_k \mid A_n \text{ occurs}\}.$$

Under suitable regularity (stationarity, strictly positive probability, and monotonicity) assumptions, the intervals (or cycles) between these regeneration times are shown to be i.i.d., or at least one-dependent under weaker separation conditions (Foss et al., 2012).
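As a toy illustration (my sketch, not from the paper), the recursive definition of the $T_k$ can be simulated directly, with a one-step look-ahead event standing in for a genuinely future-measurable $F_n$; the noise sequence, event choices, and parameters below are invented for the example:

```python
import random

random.seed(0)
N = 10_000
xi = [random.random() < 0.5 for _ in range(N)]   # i.i.d. driving noise

# Past event H_n: xi[n] holds; "future" event F_n: xi[n+1] holds (a one-step
# look-ahead proxy for a genuinely future-measurable event).
def A(n):
    return xi[n] and xi[n + 1]

# T_0 = min{n >= 0 : A_n occurs}, T_{k+1} = min{n > T_k : A_n occurs}
T = [n for n in range(N - 1) if A(n)]
cycles = [b - a for a, b in zip(T, T[1:])]       # inter-regeneration lengths

# Because F_n peeks one step ahead, consecutive cycles are one-dependent
# rather than fully independent; the mean cycle length is about 1/P(A_n) = 4.
print(sum(cycles) / len(cycles))
```

Note that the one-step look-ahead already produces one-dependent rather than i.i.d. cycles, which is exactly the weaker regime the theorem's separation conditions address.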

This framework subsumes various ad hoc constructions for identifying regenerative structures, bringing together approaches from renewal processes, Markov chains, and discrete particle systems.

2. Stationarity and Monotonicity Conditions

The regenerative theory developed in (Foss et al., 2012) requires the following core conditions:

  • Stationarity: The future event sequence $\{F_n\}_{n \in \mathbb{Z}}$ is strictly stationary, and each $F_n$ occurs with positive probability.
  • Monotonicity (future event linking): For all $n \geq 0$, $m > 0$, there exist "linking" events $E_{n, n+m}$ (stationary in $n$) such that

$$F_n \cap F_{n+m} = E_{n, n+m} \cap F_{n+m}.$$

This ensures the conditioning on increasingly many future events does not corrupt the independence of cycles via accumulated dependencies.

A key distributional formula emerges:

$$\mathbb{P}(T_{k+1} - T_k = n) = a\, \mathbb{P}(E_{0, n}),$$

for all $n \geq 1$ and some normalizing constant $a > 0$. Thus, the inter-regeneration time law is governed by the distribution of the linking events.

3. Extension to Functionals and One-Dependence

Beyond identifying i.i.d. cycles, the theory encompasses functionals of the process that depend on the increments between regeneration times. Specifically, for a sequence of process increments $R_i(X_{n+i}, X_n)$: if, once $A_n$ occurs, $R_i$ depends only on the future—in the sense that $R_i(X_{n+i}, X_n)\,\mathbf{1}_{A_n}$ can be written as a measurable function $g_i(\xi_{n+1}, \dots, \xi_{n+i})\,\mathbf{1}_{A_n}$, with $\{\xi_n\}$ representing the i.i.d. driving noise—then the vectors

$$\{ R_i(X_{T_j+i}, X_{T_j}) \mid i = 1, \ldots, T_{j+1} - T_j \}$$

are themselves i.i.d. across cycles (or one-dependent if minimal separation is imposed and functionals can depend on a finite "window" into the past).

This structure is essential for proving strong limit theorems (LLNs, CLTs) for additive functionals in stochastic processes with history- or future-dependent randomization.
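To make the regenerative limit-theorem machinery concrete, here is a minimal sketch (my illustration, not from the paper) using the simplest regeneration structure of all, an atom: visits to state 0 of a reflected random walk cut the path into exactly i.i.d. cycles, and the ratio of cycle sums to cycle lengths estimates the stationary mean. The chain and its drift parameter are chosen purely for the example:

```python
import random

random.seed(1)
p_up = 1 / 3                       # P(step = +1); down-steps have prob 2/3
n_steps = 200_000

# Reflected random walk X_{n+1} = max(X_n + xi_{n+1}, 0). Its stationary law
# here is geometric with ratio 1/2, so the stationary mean equals 1.
x = 0
cycle_sums, cycle_lens = [], []    # additive functional and length per cycle
cur_sum, cur_len = 0.0, 0
for _ in range(n_steps):
    x = max(x + (1 if random.random() < p_up else -1), 0)
    cur_sum += x
    cur_len += 1
    if x == 0:                     # visit to the atom: a regeneration time
        cycle_sums.append(cur_sum)
        cycle_lens.append(cur_len)
        cur_sum, cur_len = 0.0, 0

# Regenerative ratio estimator of the stationary mean E[X]:
# (mean cycle sum) / (mean cycle length), justified by the LLN over cycles.
ratio = sum(cycle_sums) / sum(cycle_lens)
print(ratio)
```

The same ratio-of-cycle-quantities argument is what underlies the CLTs mentioned above, once cycle sums have finite variance.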

4. Applications to Particle Systems and Infinite-Bin Models

Discrete-time Contact Processes:

  • Two-state contact process: The future event $F_n$ is defined as the event that the process started from the configuration at time $n$ "survives" indefinitely (e.g., the infection propagates to infinity). With $A_n = F_n$, the increments of the rightmost (or leftmost) infected site between regenerations are i.i.d., enabling direct proofs of laws of large numbers and central limit theorems for the infection front.
  • Three-state process with immunisation: Here, monotonicity is lost due to immunisation. The past event $H_n$ (e.g., achieving a new record right endpoint) is intersected with $F_n$ to restore a usable regenerative structure, which is then applied to study growth and fluctuation properties in non-monotonic dynamics.

Infinite-bin Models:

  • In both discrete and continuous space, the process is modeled as particles associated with "bins", updated by random selection and displacement. Conditioning on appropriate future events (e.g., finite projections of the configuration being "reset") defines regeneration events after which the system's subsequent evolution is independent of the past. This enables analysis of the convergence of finite-dimensional projections and the derivation of limit theorems for growth processes in both discrete and continuous bin models.
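A minimal simulation sketch of one common discrete infinite-bin variant (in the spirit of the Foss–Konstantopoulos model; the parameter `K`, the initial configuration, and the front-ranked selection rule below are illustrative assumptions, not the paper's exact setup): at each step the $\xi_n$-th particle counted from the front spawns a new particle one bin to its right, and the front advances at a limiting speed.

```python
import random

random.seed(2)
K = 3                       # xi_n is uniform on {1, ..., K}: rank of acting particle
bins = {0: K}               # bin position -> particle count; start with K particles
front = 0                   # rightmost nonempty bin

def place(rank):
    """Find the bin of the rank-th particle counted from the front,
    and add a new particle one bin to its right."""
    global front
    pos, seen = front, 0
    while True:
        seen += bins.get(pos, 0)
        if seen >= rank:
            break
        pos -= 1
    bins[pos + 1] = bins.get(pos + 1, 0) + 1
    front = max(front, pos + 1)

n = 50_000
for _ in range(n):
    place(random.randint(1, K))

speed = front / n           # empirical front speed; a limit is known to exist
print(speed)
```

Since rank 1 always selects a particle in the front bin, the front advances at least at rate $1/K$, which bounds the empirical speed away from zero.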

5. Relation to Harris Ergodicity and Markov Chain Regeneration

A significant unification arises by casting classical Harris ergodicity in Markov chains within this framework. Harris recurrence is characterized by the existence of a "small set" $V$ such that upon hitting $V$ (past event $H_n$), a random coin flip (future event $F_n$) with positive probability $p$ "regenerates" the process. The classic coupling argument with randomization is realized as $A_n = H_n \cap F_n$, fitting precisely into the general theory of stochastic regenerative conditioning.

For $m = 1$, a Harris-ergodic chain is representable as a stochastic recursion driven by an i.i.d. noise sequence, with regeneration emerging as a particular instance of the general theory.
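The small-set construction can be sketched concretely. In the following toy example (my illustration, not from the paper), every state of a three-state chain satisfies the minorization $P(x, \cdot) \ge p\,\nu(\cdot)$, so the whole state space is a small set ($H_n$ always holds) and regeneration is driven purely by the coin flip $F_n$; the kernel, $\nu$, and $p$ are invented for the example:

```python
import random

random.seed(3)
# Toy kernel P and minorizing measure nu with P[i][j] >= p * nu[j] for all i, j.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]
nu = [0.4, 0.3, 0.3]
p = 0.5

def step(x):
    """One Nummelin-split transition: coin flip F_n, then nu or the residual kernel.
    By construction p*nu + (1-p)*residual recovers P exactly."""
    if random.random() < p:                                     # future coin flip F_n
        return random.choices(range(3), weights=nu)[0], True    # regeneration
    w = [(P[x][j] - p * nu[j]) / (1 - p) for j in range(3)]     # residual kernel
    return random.choices(range(3), weights=w)[0], False

x, regen_times = 0, []
for n in range(100_000):
    x, regenerated = step(x)
    if regenerated:
        regen_times.append(n)

# With H_n always true, regenerations occur independently with probability p
# each step, so inter-regeneration times are geometric with mean 1/p = 2.
rate = len(regen_times) / 100_000
print(rate)
```

After each regeneration the next state is drawn from $\nu$ regardless of the past, so the cycles between successive `regen_times` are i.i.d., exactly as in the general theory.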

6. Mathematical and Structural Synthesis

The theoretical synthesis is summarized by several central formulas:

| Formula | Interpretation |
| --- | --- |
| $A_n = H_n \cap F_n$ | Defining regeneration events via past and future events |
| $\mathbb{P}(T_{k+1} - T_k = n) = a\,\mathbb{P}(E_{0,n})$ | Inter-regeneration time law in terms of the linking events $E$ |
| $\{R_i(X_{T_j+i}, X_{T_j})\}$ i.i.d. | i.i.d. structure of functionals over regeneration cycles |

These formal identities clarify the regenerative structure even in non-Markovian or non-monotonic processes, provided suitable monotonicity and stationarity conditions on the event families are met.

7. Impact and Generalizations

The unified framework presented in (Foss et al., 2012) generalizes previous application-specific constructions of regeneration, providing a systematic method for incorporating both future- and past-dependent information. This theory has yielded new results and deeper structural insight in:

  • Particle system models, particularly for analyzing the speed, growth, and fluctuations of infection fronts in classical and immunising contact processes.
  • Infinite-bin and many-particle models, for which new coupling and convergence results are derived, even under relaxed or number-theoretic conditions.
  • The theory of regenerative Markov processes and the structure of Harris ergodic chains, extending classical regeneration to include additional randomization through future events.

This formalism underpins rigorous limit theorems in the study of stochastic processes with both infinite memory and infinite anticipation, and invites new classes of models—where regeneration is defined through global path functionals—to be treated within a single probabilistic framework.
