
Piecewise-Deterministic Markov Processes

Updated 9 March 2026
  • Piecewise-Deterministic Markov Processes (PDMPs) are continuous-time models that combine deterministic flows with random jumps, characterized by a semiflow, jump rate, and transition kernel.
  • They provide a robust framework for applications in biological systems, Bayesian computation, and optimal control, supported by strong semigroup properties and convergence guarantees.
  • Recent advances focus on improving statistical estimation methods and developing high-order numerical simulation techniques, extending PDMP theory to infinite-dimensional settings.

A piecewise-deterministic Markov process (PDMP) is a continuous-time stochastic process—originally formalized by M. H. A. Davis in the early 1980s—that evolves via deterministic flows between random, instantaneous jumps, with the Markov property at the level of the full process. The canonical structure comprises a deterministic semiflow, a jump rate (hazard function), and a post-jump transition kernel. PDMPs form a broad framework for modeling non-diffusive phenomena, including stochastic hybrid systems, biological switching, Bayesian computation, and optimal control in both finite and infinite dimensions.

1. Formal Structure and Definition

A PDMP on a measurable state space $E$ is specified by:

  • Deterministic flow: $\Phi:\mathbb{R}_+ \times E \rightarrow E$, typically given by an ODE solution $t \mapsto \Phi(t|x)$ with the semigroup property $\Phi(0|x) = x$, $\Phi(t+s|x) = \Phi(s|\Phi(t|x))$.
  • Jump rate: $\lambda:E \rightarrow \mathbb{R}_+$, a non-negative, possibly state-dependent function governing the exponential holding time until the next random event.
  • Transition kernel: $Q(x,dy)$, a Markov kernel specifying the distribution of the next state upon a jump from $x$.

The process evolves as follows: from $X(0)=x$, the state follows the deterministic flow $\Phi(t|x)$ until the first jump time $T_1$, distributed according to the survival function

$$P(T_1>t \mid X(0)=x) = \exp\left( -\int_0^t \lambda(\Phi(s|x))\,ds \right).$$

At $t=T_1$, the process jumps to a new state drawn from $Q(\Phi(T_1|x),\cdot)$, and the construction repeats. In the presence of a boundary, the flow may exit the interior deterministically, in which case a forced jump is executed (Azaïs et al., 2016).
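The first-jump construction can be made concrete with a small sketch. The flow $\Phi(t|x) = x + t$ and rate $\lambda(y) = y$ (for $x \ge 0$) are illustrative assumptions chosen so that the integrated rate $\Lambda(t) = xt + t^2/2$ inverts in closed form; they are not taken from any of the cited papers.

```python
import math
import random

def sample_first_jump_time(x, u):
    """Invert the survival relation exp(-Lambda(t)) = u for the
    illustrative flow Phi(t|x) = x + t and rate lambda(y) = y,
    where Lambda(t) = x*t + t**2/2 (assumes x >= 0)."""
    e = -math.log(u)                       # an Exponential(1) variate
    return -x + math.sqrt(x * x + 2.0 * e)

random.seed(0)
x0 = 1.0
u = random.random()
t1 = sample_first_jump_time(x0, u)
# By construction, evaluating the survival function at t1 recovers u.
survival = math.exp(-(x0 * t1 + t1 ** 2 / 2.0))
```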

The infinitesimal generator for smooth test functions $f:E\rightarrow\mathbb{R}$ is

$$\mathcal{A} f(x) = \nabla f(x)\cdot b(x) + \lambda(x)\int_E [f(y) - f(x)]\, Q(x,dy),$$

where $b(x)$ is the vector field underlying the deterministic dynamics (Rudnicki et al., 2015, Fearnhead et al., 2016, Bertazzi et al., 2021).
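As a concrete instance of this formula, the classical telegraph process (used here purely as an illustrative special case) takes state $(x,v) \in \mathbb{R} \times \{-1,+1\}$, flow $\dot{x} = v$, a constant rate $\lambda$, and a kernel $Q$ that deterministically flips the velocity, giving

```latex
% Generator of the telegraph process: a drift term along the flow
% plus a jump term that flips the velocity sign.
\mathcal{A} f(x,v) = v\,\partial_x f(x,v)
                   + \lambda\,\bigl(f(x,-v) - f(x,v)\bigr).
```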

2. Semigroup, Generator, and Analytical Foundation

Given the above specification, PDMPs induce a Feller Markov semigroup $\{P_t\}_{t\geq 0}$ on $C_0(E)$, the space of continuous functions vanishing at infinity. Under mild Lipschitz and non-expansiveness conditions on the flow and jumps, together with integrability of the rate, the semigroup $P_t f(x) = \mathbb{E}_x[f(X_t)]$ is well-posed, strongly continuous, and preserves $C_0(E)$. The extended generator $\mathcal{A}$ is defined via the martingale problem: for $f \in \mathrm{Dom}(\mathcal{A})$, the process

$$M_t^f = f(X_t) - f(X_0) - \int_0^t (\mathcal{A}f)(X_s)\,ds$$

is a local martingale (Holderrieth, 2019).

The main analytical results indicate that $C_c^\infty(E)$, the space of infinitely differentiable functions with compact support, forms a core for the generator: for any $f$ in the domain there exists a sequence $(f_n) \subset C_c^\infty(E)$ with $f_n \to f$ and $\mathcal{A}f_n \to \mathcal{A}f$ uniformly (Holderrieth, 2019).

In the infinite-dimensional (Hilbert space) setting, the extended generator is likewise well-defined under local boundedness of the intensity and non-expansiveness of the jump kernel, enabling analysis of ergodicity, convergence, and approximation (Dobson et al., 2022).

3. Statistical Estimation and Inference for Jump Dynamics

Recent developments provide constructive methods for estimating key quantities such as the jump rate $\lambda$ in discrete-state PDMPs. Azaïs–Genadot (Azaïs et al., 2016) introduced a new characterization theorem for the jump rate under discrete transitions: for each state $x$, $\lambda(x)$ can be reconstructed exactly as a sum over orthonormal basis expansions of a double-marked rate, weighted by the Markov chain's transition matrix. Explicitly, with expansion coefficients $\theta_p(x, y)$ and basis functions $B_p$,

$$\lambda(x) = \sum_{p=0}^{\infty} B_p(0) \sum_{y \in \mathcal{E}} R(\{y\} \mid x)\,\theta_p(x, y),$$

where $\mathcal{E}$ is the finite state grid and $R(\{y\} \mid x)$ denotes the empirical transition probabilities of the post-jump chain.

A nonparametric estimator is constructed via:

  • Empirical estimation of the transition matrix.
  • Nelson–Aalen-type estimator for the cumulative double-marked rate.
  • Orthonormal series expansion truncated at an index $\tau_n$ tuned with respect to the sample size.

Uniform consistency in probability of the resulting estimator $\widehat{\lambda}^n(x)$ is established as $n \to \infty$, under irreducibility and mild regularity assumptions (Azaïs et al., 2016).

This framework generalizes to a wide class of PDMPs via the theory of additive and multiplicative intensities, supporting inference under minimal parametric assumptions (Azaïs et al., 2016).
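The Nelson–Aalen step can be illustrated in isolation. The simulation below feeds the estimator i.i.d. exponential holding times with a known rate as a sanity check; the rate value, sample size, and evaluation point are illustrative assumptions, not choices from (Azaïs et al., 2016).

```python
import random

def nelson_aalen(times, t):
    """Nelson-Aalen estimate of the cumulative hazard at time t
    from fully observed (uncensored) holding times."""
    ordered = sorted(times)
    n = len(ordered)
    hazard = 0.0
    for i, s in enumerate(ordered):
        if s > t:
            break
        hazard += 1.0 / (n - i)   # one event; (n - i) subjects still at risk
    return hazard

random.seed(1)
rate = 2.0
holding = [random.expovariate(rate) for _ in range(20000)]
# True cumulative hazard at t = 0.5 is rate * t = 1.0.
est = nelson_aalen(holding, 0.5)
```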

4. Numerical Methods: Exact and Approximate Simulation

Numerical simulation of PDMPs is central for both theoretical analysis and practical application.

Exact Simulation

For explicit forms of $\Phi$ and computable integrals of $\lambda$, exact pathwise simulation is achieved by:

  1. Solving for the next jump time $\tau$ via the implicit equation

$$\exp\left(-\int_0^{\tau} \lambda(\Phi(s|x))\,ds\right) = U,$$

with $U \sim \mathrm{Unif}(0,1)$.

  2. Evolving the system under the ODE to time $\tau$.
  3. Sampling the new state from $Q$.

This procedure forms the backbone of algorithmic simulation for PDMPs (Riedler, 2011, Bertazzi et al., 2021).
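The three steps above can be assembled into a minimal end-to-end path simulator. The flow ($\dot{x} = -x$, so $\Phi(t|x) = x e^{-t}$), the rate ($\lambda(y) = 1 + y^2$, whose integral along the flow has a closed form), and the Gaussian jump kernel are all illustrative assumptions; the implicit equation for $\tau$ is solved by bisection rather than analytically.

```python
import math
import random

def flow(t, x):
    """Illustrative deterministic flow Phi(t|x) for the ODE x' = -x."""
    return x * math.exp(-t)

def cum_rate(t, x):
    """Integrated rate Lambda(t) = int_0^t lambda(Phi(s|x)) ds
    for the illustrative rate lambda(y) = 1 + y**2."""
    return t + 0.5 * x * x * (1.0 - math.exp(-2.0 * t))

def next_jump_time(x, u):
    """Solve exp(-Lambda(tau)) = u by bisection; Lambda is increasing
    and Lambda(t) >= t, so the root lies in [0, -log(u)]."""
    e = -math.log(u)
    lo, hi = 0.0, e
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if cum_rate(mid, x) < e:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def simulate(x0, horizon, rng):
    """Exact PDMP path: list of (jump time, pre-jump state) up to horizon."""
    t, x, path = 0.0, x0, []
    while True:
        tau = next_jump_time(x, rng.random())
        if t + tau > horizon:
            return path
        t += tau
        x_pre = flow(tau, x)
        path.append((t, x_pre))
        x = x_pre + rng.gauss(0.0, 1.0)   # illustrative jump kernel Q

rng = random.Random(42)
path = simulate(1.0, 10.0, rng)
```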

Numerical Approximation and Pathwise Convergence

When explicit integration is infeasible, high-order methods are employed:

  • ODE solvers with dense output (e.g., continuous Runge–Kutta).
  • Event detection formulated as a hitting-time problem for an augmented ODE with a random threshold.
  • The main convergence theorem establishes that the pathwise error (in maximum norm over all jump times and states) converges almost surely at the rate of the integrated deterministic method:

$$\max_{n \le N(T)} |X(t_n) - X_h(t_n^h)| = O(h^p)\ \text{a.s.},$$

where $h$ is the ODE solver step, $p$ its order, and $t_n^h$ the numerically obtained jump times (Riedler, 2011, Bertazzi et al., 2021).

High-order schemes for the flow, jump rates, and transition kernel can be rigorously analyzed, with pathwise convergence and weak error rates provided under global Lipschitz and boundedness assumptions (Bertazzi et al., 2021).
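The hitting-time formulation can be sketched without any solver library: integrate the augmented system $(x, \Lambda)$ with classical Runge–Kutta and refine the threshold crossing by bisection over the final step. The vector field and rate below are illustrative assumptions, chosen so the result can be checked against a closed-form integrated rate.

```python
import math

def b(x):
    """Illustrative vector field of the deterministic dynamics."""
    return -x

def lam(x):
    """Illustrative state-dependent jump rate."""
    return 1.0 + x * x

def rk4_step(y, h):
    """One classical RK4 step for the augmented system y = (x, Lambda),
    with x' = b(x) and Lambda' = lam(x)."""
    def f(z):
        x, _ = z
        return (b(x), lam(x))
    k1 = f(y)
    k2 = f((y[0] + 0.5 * h * k1[0], y[1] + 0.5 * h * k1[1]))
    k3 = f((y[0] + 0.5 * h * k2[0], y[1] + 0.5 * h * k2[1]))
    k4 = f((y[0] + h * k3[0], y[1] + h * k3[1]))
    return (y[0] + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y[1] + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

def jump_time(x0, u, h=1e-3):
    """March (x, Lambda) forward until Lambda crosses the random
    threshold -log(u); refine the hitting time within the last step."""
    threshold = -math.log(u)
    t, y = 0.0, (x0, 0.0)
    while True:
        y_new = rk4_step(y, h)
        if y_new[1] >= threshold:
            lo, hi = 0.0, h
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if rk4_step(y, mid)[1] < threshold:
                    lo = mid
                else:
                    hi = mid
            s = 0.5 * (lo + hi)
            return t + s, rk4_step(y, s)[0]
        t, y = t + h, y_new

tau, x_tau = jump_time(1.0, 0.5)
```

For this choice of dynamics, $\Lambda(t) = t + \tfrac{1}{2}(1 - e^{-2t})$ in closed form, so the numerically detected crossing can be verified directly.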

5. Applications in Computational Statistics, Control, and Population Models

PDMPs are foundational in several modern applications:

  • Markov Chain Monte Carlo (MCMC): The Zig-Zag process, Bouncy Particle Sampler, and Hamiltonian piecewise-deterministic MCMC are PDMPs on $\mathbb{R}^d$ or its product with velocity variables, enabling efficient, non-reversible, continuous-time sampling from Bayesian posteriors and complex distributions. These samplers exploit deterministic flows (linear or Hamiltonian), state-dependent jump rates, and explicit velocity flip or bounce kernels, yielding ergodic algorithms with control-variate and stochastic gradient enhancements for big data scaling (Fearnhead et al., 2016, Vanetti et al., 2017, Terenin et al., 2018, Andrieu et al., 2018, Dobson et al., 2022).
  • Optimal Control and Risk-sensitive Decision Processes: PDMPs underpin continuous-time Markov decision processes with deterministic dynamics between random transitions, supporting both classical and risk-sensitive (exponential utility) criteria. Dynamic programming equations (PDEs and quasi-variational inequalities) and value iteration schemes have been established, guaranteeing the existence of stationary optimal policies under broad regularity (Bandini, 2015, Guo et al., 2017, Gee et al., 2024).
  • Biological and Population Dynamics: PDMPs encode gene expression, cell-cycle progression, population-structured models, and neural activity through deterministic ODEs coupled to stochastic events (e.g., switching, division, spikes). The theory guarantees ergodicity and convergence to stable distributions, conditioned on reachability and minorization conditions (Rudnicki et al., 2015).
  • Systems with Random PDE Dynamics: Infinite-dimensional PDMPs, as driven by scalar conservation laws or functional ODEs, extend the analytical and simulation framework for complex systems combining deterministic evolution with random structural changes (Knapp, 2019, Dobson et al., 2022).
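As a small worked instance of the MCMC use case, the one-dimensional Zig-Zag process targeting a standard Gaussian admits fully analytic event times: with potential $U(x) = x^2/2$, the switching rate is $\lambda(x,v) = \max(0, vx)$ and the integrated rate inverts in closed form. The sketch below is this textbook special case, not the implementation from any cited paper; moments are computed as exact time averages along the piecewise-linear path.

```python
import math
import random

def zigzag_gaussian(n_events, rng):
    """1D Zig-Zag sampler for the standard Gaussian: flow x' = v with
    v in {-1, +1}, switching rate lambda(x, v) = max(0, v * x), and a
    deterministic velocity flip at each event. Event times are drawn
    by analytic inversion of the integrated rate. Returns the
    time-averaged first and second moments of x."""
    x, v = 0.0, 1.0
    t_total = s1 = s2 = 0.0
    for _ in range(n_events):
        e = -math.log(rng.random())       # Exponential(1) threshold
        a = x * v
        tau = -a + math.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
        # Closed-form integrals of x and x**2 along the linear segment.
        s1 += x * tau + v * tau * tau / 2.0
        s2 += ((x + v * tau) ** 3 - x ** 3) / (3.0 * v)
        t_total += tau
        x += v * tau
        v = -v
    return s1 / t_total, s2 / t_total

rng = random.Random(7)
mean, second_moment = zigzag_gaussian(200_000, rng)
```

The time averages should approach the Gaussian moments ($0$ and $1$) as the number of events grows.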

6. Theoretical Developments: Generators, Ergodicity, and Core Results

Recent rigorous results confirm:

  • Feller and strong continuity of the semigroup: Under Lipschitz and growth control on the deterministic and jump maps (Holderrieth, 2019, Dobson et al., 2022).
  • Cores for generator: $C_c^\infty(E)$ is a core for the infinitesimal generator of the semigroup in PDMP-driven MCMC, supporting precise martingale characterizations, invariance proofs, and spectral-gap (hypocoercivity) estimates (Holderrieth, 2019, Andrieu et al., 2018).
  • Spectral scaling in high dimensions: For randomized Hamiltonian, Bouncy Particle, and Zig-Zag samplers, dimension-dependent scaling of the convergence rate is controlled via explicit dependence on the refreshment, geometry, and Poincaré inequalities of the target (Andrieu et al., 2018).
  • Infinite-dimensional extension: Abstract analysis confirms exponential convergence for infinite-dimensional Boomerang samplers, with finite-dimensional approximations converging uniformly (Dobson et al., 2022).

7. Extensions, Limitations, and Open Directions

While PDMPs provide a versatile and analytically robust class of Markov processes, several limitations exist:

  • For continuous-state transition kernels QQ, direct analogues of some discrete techniques (e.g., the spectral estimator in (Azaïs et al., 2016)) require new tools.
  • Uniform consistency of statistical estimators is established, but explicit finite-sample rates, central limit theorems, and adaptive selection of tuning parameters remain open (Azaïs et al., 2016).
  • Infinite-dimensional PDMPs introduce additional measurability, integrability, and tightness challenges, particularly when event rates are unbounded in the underlying Hilbert space (Dobson et al., 2022).
  • The occasional observation framework for PDMPs with partial, asynchronous, or delayed state information (e.g., "occasionally observed" PDMPs) yields dynamic programming equations on the belief space, but general solutions face the curse of dimensionality. Under stronger assumptions, computational tractability can be recovered with complexity linear in the number of modes (Gee et al., 2024).

PDMPs continue to be a focal point for methodological and applied research, at the interface of stochastic processes, numerical analysis, large-scale computation, and control theory.
