
Process-Based Data Filtering

Updated 18 August 2025
  • Process-based data filtering is a family of methods that exploit the structure of underlying stochastic processes, notably duality, to enable efficient, tractable Bayesian inference.
  • It exploits process structures such as Markov chains, conjugacy, and duality, maintaining a finite mixture representation with each recursive update.
  • This approach applies to domains like signal processing, finance, and population genetics, offering polynomial-time computational benefits over traditional methods.

Process-based data filtering refers to a family of methods, algorithms, and modeling frameworks in which data filtering and inference leverage underlying process structures—typically stochastic, temporal, or state-dependent—rather than relying solely on data-level or static heuristics. Originating in areas such as optimal filtering for hidden Markov models, process mining, and structured monitoring in both physical and cyber domains, process-based filtering enables recursive, tractable, and often interpretable updates to the information state, translating the evolution of the true process (the "signal") into manageable computations or explicit formulas. Approaches in this domain typically exploit duality relations, Markov structures, conjugacy classes, or process-level symmetry, yielding significant gains in computational efficiency and robustness over unstructured methods.

1. Duality and Process Structure in Filtering

The core innovation introduced in "Optimal filtering and the dual process" (Papaspiliopoulos et al., 2013) is the formalization of duality between the signal (state) process and a specifically constructed auxiliary (dual) process. For hidden Markov models (HMMs) and related continuous-time processes, the signal $X_t$ is linked by duality to a process with two components: a deterministic trajectory (following an ODE) and a multidimensional pure death process. The duality is with respect to a set of functions $h(\cdot, m, \theta)$—multiplicatively conjugate to the emission densities—and with respect to the reversible measure $\pi(dx)$.

Formally, the duality satisfies

$\mathbb{E}^{x}[h(X_t, m, \theta)] = \mathbb{E}^{(m, \theta)}[h(x, M_t, \Theta_t)]$

where $M_t$ is the death-process component and $\Theta_t$ evolves by $d\Theta_t/dt = r(\Theta_t)$.

This dual process construction enables the filtering distribution to be transferred onto the evolution of mixtures over a finite set parametrized by $(m, \theta)$. Importantly, the prediction and update steps (for continuous and discrete observations, respectively) preserve this finite mixture structure, as the evolution in the dual space is explicit and conjugate.
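A classical instance makes the duality concrete (this simple case is standard in the literature and omits the deterministic component of the paper's general construction): the neutral one-dimensional Wright–Fisher diffusion is moment-dual to the block-counting process of Kingman's coalescent, a pure death process $N_t$ that jumps from $n$ to $n - 1$ at rate $\binom{n}{2}$, with duality functions $h(x, n) = x^n$:

$\mathbb{E}^{x}[X_t^{n}] = \mathbb{E}^{n}[x^{N_t}]$

The general construction augments such pure-death duals with the deterministic trajectory $\Theta_t$ of equation (5) below.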

2. Recursive Updates and Finite Mixture Structure

Given an initial filtering distribution in the class

$\mathcal{F} = \{ h(x, m, \theta)\, \pi(dx) : m \in M,\ \theta \in \Theta \}$

the recursive filtering process proceeds as follows:

  • Update step (Bayesian inversion): With an observation $y$,

$\varphi_y(\nu)(dx) = \sum_m \frac{w_m\, c(m, \theta, y)}{\text{normalizing constant}}\, h(x, t(y, m), T(y, \theta))\, \pi(dx)$

  • Prediction step (propagation in time $t$):

$\psi_t(\nu)(dx) = \sum_n \bigg( \sum_m w_m\, p_{m, n}(t; \theta) \bigg) h(x, n, \Theta_t)\, \pi(dx)$

Here $\{w_m\}$ are the mixture weights; $(t(y, m), T(y, \theta))$ update the parameters via conjugacy; $c(m, \theta, y)$ is a density-ratio coefficient (change of measure); and $p_{m, n}(t; \theta)$ are explicit transition probabilities of the death process. This closure under update and prediction is highly non-trivial and enables explicit, recursive filtering computations, as illustrated in the sketch below.
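A minimal Python sketch of one update/prediction cycle. The concrete choices here are illustrative assumptions, not the paper's general construction: Poisson emissions $f_x(y) = x^y e^{-x}/y!$, gamma components $h(x, m, \theta)\, \pi(dx) = \mathrm{Ga}(\alpha + m, \theta)$ (so $c(m, \theta, y)$ is a negative binomial probability; see the worked example in Section 5), and a linear death process with binomial transition probabilities.

```python
from math import comb, exp
from scipy.stats import nbinom

def update(weights, theta, y, alpha=1.0):
    """Update step: conjugacy sends (m, theta) -> (m + y, theta + 1);
    each component is reweighted by c(m, theta, y), which here equals the
    negative binomial marginal likelihood of y under Ga(alpha + m, theta)."""
    new_w = {}
    for m, w in weights.items():
        c = nbinom.pmf(y, alpha + m, theta / (theta + 1.0))
        new_w[m + y] = new_w.get(m + y, 0.0) + w * c
    Z = sum(new_w.values())  # normalizing constant
    return {m: w / Z for m, w in new_w.items()}, theta + 1.0

def predict(weights, theta, t, lam=1.0, theta_eq=1.0):
    """Prediction step: mass flows downward through the death process
    (linear rates => binomial p_{m,n}), while theta follows a hypothetical
    exponential relaxation toward its equilibrium value theta_eq."""
    s = exp(-lam * t)  # survival probability of each dual 'particle'
    new_w = {}
    for m, w in weights.items():
        for n in range(m + 1):
            new_w[n] = new_w.get(n, 0.0) + w * comb(m, n) * s**n * (1 - s)**(m - n)
    return new_w, theta_eq + (theta - theta_eq) * exp(-lam * t)

# One filtering cycle starting from the single component m = 0:
weights, theta = {0: 1.0}, 2.0
weights, theta = update(weights, theta, y=3)      # observe y = 3
weights, theta = predict(weights, theta, t=0.5)   # propagate for time 0.5
print(round(theta, 3), {m: round(w, 3) for m, w in sorted(weights.items())})
```

Note how the mixture support shifts upward under the update ($m \mapsto m + y$) and spreads downward under prediction, exactly the closure property described above.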

3. Computational Complexity and Scalability

A key advantage of this process-based approach is that the filtering computation remains polynomial in the number of observations, since the number of mixture components grows only polynomially (rather than exponentially, as in general nonparametric filters or particle filters). Specifically, if the update function $t(y, m)$ is additive, the support set $\Lambda_n$ after $n$ observations is bounded:

$|\Lambda_n| \leq \left(1 + \frac{d_n}{K}\right)^K, \qquad d_n = \Big|m_0 + \sum_i N(Y_i)\Big|$

Computationally, each recursion requires $O(|\Lambda_n|^2)$ operations, and the total cost up to the $n$th observation is polynomial. This is a significant scaling benefit for real-time or high-dimensional filtering tasks; the snippet below evaluates the bound for illustrative values.
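A toy evaluation of the bound, with illustrative (not paper-derived) values of the dimension $K$ and the accumulated count $d_n$:

```python
# Polynomial bound on the number of mixture components after n observations,
# assuming additive updates t(y, m) = m + N(y). K and d_n are illustrative.
K = 3                       # dimension of the death-process component
for d_n in (10, 50, 100):   # accumulated count m_0 + sum_i N(Y_i)
    bound = (1 + d_n / K) ** K
    print(f"d_n = {d_n:4d}: |Lambda_n| <= {bound:,.0f}")  # grows like d_n^K
```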

In cases where the dual is deterministic, as in linear Gaussian systems (the Kalman filter), the computational cost reduces to linear in the number of observations.

4. Generalization to Classical and Novel Models

This process-based duality framework is general enough to subsume several notable filtering models:

  • Kalman Filter: For linear Gaussian processes, the dual process is purely deterministic. The filtering distribution remains Gaussian; mean and covariance are updated deterministically and recursively, matching the traditional Kalman filter algorithm.
  • Cox-Ingersoll-Ross (CIR) Process: The signal is a positive-valued diffusion with gamma invariant measure. The dual is a novel gamma-type form constructed in the paper: the filtering distribution becomes a finite mixture of gamma distributions with parameters evolving via jump (death-process) and ODE steps.
  • Wright–Fisher Diffusion: In population genetics, the state variable lies on a simplex and evolves according to a multidimensional diffusion. The duality preserves Dirichlet structure, and the filter is a finite mixture of Dirichlet measures with recursive updates.

The framework not only unifies and extends classical filters but also provides explicit, computable filtering solutions for previously intractable models (e.g., the CIR duality); the Kalman special case is sketched below.
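A minimal sketch of the Kalman special case, where the dual is purely deterministic and filtering reduces to propagating a Gaussian mean and variance. The scalar state-space model and the parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def kalman_step(mean, var, y, a=0.9, q=0.1, r=0.5):
    """One recursion of the scalar Kalman filter: the 'dual' here is the
    deterministic propagation of the Gaussian parameters (mean, var)."""
    mean_pred, var_pred = a * mean, a * a * var + q   # prediction step
    gain = var_pred / (var_pred + r)                  # conjugate Gaussian update
    return mean_pred + gain * (y - mean_pred), (1 - gain) * var_pred

rng = np.random.default_rng(0)
mean, var, x = 0.0, 1.0, 0.0
for t in range(5):
    x = 0.9 * x + rng.normal(0.0, 0.1 ** 0.5)   # latent AR(1) signal
    y = x + rng.normal(0.0, 0.5 ** 0.5)         # noisy observation
    mean, var = kalman_step(mean, var, y)
    print(f"t={t}: filter mean {mean:+.3f}, true state {x:+.3f}")
```

Because the dual has no random death component here, the per-observation cost is constant and the total cost is linear in the number of observations, as noted in Section 3.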

5. Explicit Mathematical Formulations

Key formulas underlying the approach include:

Duality relation:

$\mathbb{E}^{x}[h(X_t, m, \theta)] = \mathbb{E}^{(m, \theta)}[h(x, M_t, \Theta_t)] \tag{1}$

Prediction operator:

$\psi_t(h(x, m, \theta)\, \pi(dx)) = \sum_{i \leq m} p_{m, m-i}(t; \theta)\, h(x, m-i, \Theta_t)\, \pi(dx) \tag{2}$

Update step:

$\varphi_y(h(x, m, \theta)\, \pi(dx)) = \frac{c(m, \theta, y)\, h(x, t(y, m), T(y, \theta))\, \pi(dx)}{p(y)} \tag{3}$

with

$c(m, \theta, y) = \frac{f_x(y)\, h(x, m, \theta)}{h(x, t(y, m), T(y, \theta))}$

and $p(y) = \int_X f_x(y)\, \pi(dx)$.
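As a worked illustration of (3) under hypothetical conjugate choices (the same gamma/Poisson pairing used in the sketches above, not the paper's general setting): take $\pi = \mathrm{Ga}(\alpha, \beta)$, components $h(x, m, \theta)\, \pi(dx) = \mathrm{Ga}(\alpha + m, \theta)$, and Poisson emissions $f_x(y) = x^y e^{-x} / y!$. Then $t(y, m) = m + y$ and $T(y, \theta) = \theta + 1$, the $x$-dependence cancels in the ratio defining $c$, and

$c(m, \theta, y) = \frac{\Gamma(\alpha + m + y)}{\Gamma(\alpha + m)\, y!} \cdot \frac{\theta^{\alpha + m}}{(\theta + 1)^{\alpha + m + y}}$

which is the negative binomial probability of $y$, confirming that the update coefficients are explicit.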

Generator of the dual process:

$(A g)(m, \theta) = \lambda(|m|)\, \rho(\theta) \sum_j m_j \big( g(m - e_j, \theta) - g(m, \theta) \big) + \langle r(\theta), \nabla_\theta\, g(m, \theta) \rangle \tag{4}$

together with the ODE for the deterministic component,

$\frac{d}{dt} \Theta_t = r(\Theta_t) \tag{5}$

All recursions derive from these constructions; updating mixture weights, death process states, and deterministic parameters constitutes the filtering algorithm.
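A brief sketch of the two dual-process ingredients in (4)–(5), under the illustrative assumption of a one-dimensional dual with linear death rate $\lambda(|m|)\, \rho(\theta) = m$ (so that $p_{m, m-i}(t)$ is binomial) and a generic user-supplied $r(\theta)$ integrated by Euler steps; neither choice is prescribed by the paper:

```python
from math import comb, exp

def death_transitions(m, t):
    """p_{m, m-i}(t) for a pure death process with unit per-particle rate:
    each of the m particles survives time t independently with prob e^{-t},
    so the survivor count is Binomial(m, e^{-t}) (illustrative choice)."""
    s = exp(-t)
    return {m - i: comb(m, i) * (1 - s) ** i * s ** (m - i) for i in range(m + 1)}

def ode_step(theta, t, r, n_steps=1000):
    """Euler integration of d(Theta)/dt = r(Theta), eq. (5), over [0, t]."""
    h = t / n_steps
    for _ in range(n_steps):
        theta = theta + h * r(theta)
    return theta

print(death_transitions(3, 0.5))             # transition law of M_t
print(ode_step(1.0, 0.5, lambda th: -th))    # e.g. r(theta) = -theta
```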

6. Unification, Limitations, and Extensions

By systematizing the construction of dual processes—comprising ODE and pure death process components—and explicit conjugate measures, the approach offers a unified view of both classical and more complex filtering scenarios. For models outside the polynomial closure, approximation or reduced representation may be needed, but the duality method gives clear guidance on which state/observation/parameter combinations admit tractable filtering.

A limitation is that not all continuous-time or discrete-time models admit a convenient dual structure or conjugacy, and the enumeration of mixture components—though polynomial—may still be prohibitive for large $K$ or highly informative observations. However, for most practical cases (especially low-to-moderate dimensionality and structured emission densities), the approach remains highly efficient.

The formalism is general enough that it provides a template for constructing new filters, given identification of an appropriate dual process and conjugacy relation, and is not restricted to real-valued or univariate processes.

7. Impact and Applications

Process-based data filtering via duality is applicable in settings where observations are made sequentially over hidden or partially observed stochastic processes. Applications include, but are not limited to:

  • Sequential Bayesian inference in signal processing and communications
  • Filtering and prediction in finance (interest-rate, volatility models with gamma, inverse-gamma, or Dirichlet priors)
  • Population genetics (Wright–Fisher processes and beyond)
  • Stochastic modeling in biological networks and systems biology

By delivering polynomial-time algorithms, explicit update rules, and tractable mixture representations, process-based data filtering represents a foundational paradigm for high-throughput, recursive Bayesian inference in process-driven domains.


In summary, process-based data filtering, as articulated through the duality methodology (Papaspiliopoulos et al., 2013), converts the evolution of complex stochastic models into explicit, low-dimensional computations on dual spaces. The structure-preserving property under prediction and update, polynomial scalability, and applicability to diverse classical and novel models collectively underpin its significance in the landscape of filtering theory and its applications.

References

1. Papaspiliopoulos, O. and Ruggiero, M. (2013). Optimal filtering and the dual process.