Parallel-in-Time Kalman Filters
- Parallel-in-Time Kalman Filters are advanced algorithms that recast sequential Bayesian estimation into parallel prefix-sum operations for state-space models.
- They reduce the classical O(N) time complexity to O(log N) by leveraging associative scan primitives, significantly speeding up filtering and smoothing tasks.
- These methods have been applied to Gaussian Process regression, probabilistic ODE solvers, and integrated measurement systems, achieving substantial computational speedups on modern hardware.
Parallel-in-time Kalman filters are temporally parallel algorithms for recursive Bayesian estimation in dynamical systems modeled by state-space equations. Their essential advance is the reduction of algorithmic span (critical path length) from the classical linear-in-time $O(N)$ update sequence to logarithmic $O(\log N)$ depth using associative scan primitives. This temporal parallelism, distinct from spatial or data parallelism, enables effective exploitation of modern hardware such as GPUs and multi-core CPUs for filtering and smoothing tasks in linear and certain nonlinear models, including temporal Gaussian processes, probabilistic ODE solvers, and integrated measurement systems.
1. State-Space Models and Classical Kalman Filtering
Dynamical systems for filtering are generally modeled as discrete- or continuous-time state-space equations; in the discrete-time linear-Gaussian case,
$$x_k = F_{k-1} x_{k-1} + q_{k-1}, \quad q_{k-1} \sim \mathcal{N}(0, Q_{k-1}), \qquad y_k = H_k x_k + r_k, \quad r_k \sim \mathcal{N}(0, R_k).$$
The classical Kalman filter consists of sequential predict and update recurrences, with the Rauch–Tung–Striebel (RTS) smoother extending the analysis backward in time. For $N$ time steps, classical filtering and smoothing intrinsically require $O(N)$ time due to the temporal dependency between steps. This sequential bottleneck limits throughput, especially for large-scale or high-frequency problems (Corenflos et al., 2021).
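As a point of reference for this sequential bottleneck, the following is a minimal sketch of the classical predict/update recursion for a time-invariant linear-Gaussian model, written with `jax.lax.scan`; the function and variable names are illustrative, not taken from any cited implementation.

```python
# Minimal sketch of the classical sequential Kalman recursion for a time-invariant
# linear-Gaussian model; each step depends on the previous one, giving O(N) span.
import jax
import jax.numpy as jnp

def sequential_kalman_filter(F, Q, H, R, m0, P0, ys):
    def step(carry, y):
        m, P = carry
        mp = F @ m                            # predict mean
        Pp = F @ P @ F.T + Q                  # predict covariance
        S = H @ Pp @ H.T + R                  # innovation covariance
        K = jnp.linalg.solve(S, H @ Pp).T     # Kalman gain K = Pp H^T S^{-1}
        m_new = mp + K @ (y - H @ mp)         # update mean
        P_new = Pp - K @ S @ K.T              # update covariance
        return (m_new, P_new), (m_new, P_new)
    _, (means, covs) = jax.lax.scan(step, (m0, P0), ys)
    return means, covs
```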
2. Associative Scan Formulation for Temporal Parallelization
The core insight enabling parallel-in-time filtering is to recast the temporal recurrences as associative operations:
- At each time step $k$, construct an "element" $a_k$ (e.g., $a_k = (A_k, b_k, C_k, \eta_k, J_k)$) encoding the effect of the prediction and measurement update at that step.
- Define a binary operator $\otimes$ that is associative and whose prefix combination $a_1 \otimes a_2 \otimes \cdots \otimes a_k$ produces the filter mean/covariance at time $k$.
For the standard Kalman filter, the operator is derived such that composing these elements using parallel prefix-sum (scan) primitives yields the exact posterior statistics for all time points (Corenflos et al., 2021, Särkkä et al., 13 Nov 2025). This applies to both filtering and smoothing; for smoothing, reverse scans of appropriately constructed elements compute the backward information flows.
The key associative update for filtering combines two elements $a_i = (A_i, b_i, C_i, \eta_i, J_i)$ and $a_j = (A_j, b_j, C_j, \eta_j, J_j)$ as
$$A_{ij} = A_j (I + C_i J_j)^{-1} A_i, \qquad b_{ij} = A_j (I + C_i J_j)^{-1} (b_i + C_i \eta_j) + b_j, \qquad C_{ij} = A_j (I + C_i J_j)^{-1} C_i A_j^{\top} + C_j,$$
$$\eta_{ij} = A_i^{\top} (I + J_j C_i)^{-1} (\eta_j - J_j b_i) + \eta_i, \qquad J_{ij} = A_i^{\top} (I + J_j C_i)^{-1} J_j A_i + J_i,$$
where all matrix operations can be applied in parallel if the underlying hardware supports concurrent execution (Corenflos et al., 2021, Särkkä et al., 13 Nov 2025).
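As a concrete illustration, the following is a minimal sketch of the combination operator above and its use with `jax.lax.associative_scan`, assuming a time-invariant linear-Gaussian model; the element construction (including the special handling of the first element) follows the parameterization of Särkkä & García-Fernández and should be checked against the cited papers. Function and variable names are illustrative.

```python
# A minimal parallel-in-time Kalman filter sketch built on jax.lax.associative_scan.
# The (A, b, C, eta, J) element parameterization and the combination rule follow the
# formulas above; a time-invariant model is assumed and names are illustrative.
import jax
import jax.numpy as jnp


def _T(M):
    """Transpose the last two axes (works for batched matrices)."""
    return jnp.swapaxes(M, -1, -2)


def combine(elem_i, elem_j):
    """Associative operator on filtering elements; broadcasts over a leading time axis."""
    A_i, b_i, C_i, eta_i, J_i = elem_i
    A_j, b_j, C_j, eta_j, J_j = elem_j
    I = jnp.eye(A_i.shape[-1])
    L = I + C_i @ J_j                                   # (I + C_i J_j)
    A_ij = A_j @ jnp.linalg.solve(L, A_i)
    b_ij = A_j @ jnp.linalg.solve(L, b_i + C_i @ eta_j) + b_j
    C_ij = A_j @ jnp.linalg.solve(L, C_i) @ _T(A_j) + C_j
    M = I + J_j @ C_i                                   # (I + J_j C_i)
    eta_ij = _T(A_i) @ jnp.linalg.solve(M, eta_j - J_j @ b_i) + eta_i
    J_ij = _T(A_i) @ jnp.linalg.solve(M, J_j) @ A_i + J_i
    return A_ij, b_ij, C_ij, eta_ij, J_ij


def make_elements(F, Q, H, R, m0, P0, ys):
    """Construct one filtering element per measurement for x_k = F x_{k-1} + q, y_k = H x_k + r."""
    N, n = ys.shape[0], F.shape[0]
    S = H @ Q @ H.T + R
    K = jnp.linalg.solve(S, H @ Q).T                    # K = Q H^T S^{-1}
    HS = jnp.linalg.solve(S, H).T                       # H^T S^{-1}
    A = jnp.tile(((jnp.eye(n) - K @ H) @ F)[None], (N, 1, 1))
    b = (ys @ K.T)[..., None]                           # b_k = K y_k, stored as column vectors
    C = jnp.tile(((jnp.eye(n) - K @ H) @ Q)[None], (N, 1, 1))
    eta = (ys @ (F.T @ HS).T)[..., None]                # eta_k = F^T H^T S^{-1} y_k
    J = jnp.tile((F.T @ HS @ H @ F)[None], (N, 1, 1))
    # First element: one ordinary predict/update from the prior (its eta/J are never used).
    mp, Pp = F @ m0, F @ P0 @ F.T + Q
    S1 = H @ Pp @ H.T + R
    K1 = jnp.linalg.solve(S1, H @ Pp).T                 # K1 = Pp H^T S1^{-1}
    A = A.at[0].set(jnp.zeros((n, n)))
    b = b.at[0].set((mp + K1 @ (ys[0] - H @ mp))[:, None])
    C = C.at[0].set(Pp - K1 @ S1 @ K1.T)
    return A, b, C, eta, J


def parallel_kalman_filter(F, Q, H, R, m0, P0, ys):
    """All-time filtering means/covariances via a parallel prefix scan over elements."""
    elems = make_elements(F, Q, H, R, m0, P0, ys)
    _, b, C, _, _ = jax.lax.associative_scan(combine, elems)
    return b[..., 0], C                                 # means (N, n), covariances (N, n, n)
```

Because `associative_scan` applies the operator to whole slices of elements at once, the combination is written with batched matrix products and solves; a reverse scan over analogously constructed elements yields the smoothing pass.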
3. Algorithmic Primitives, Complexity, and Implementation
Parallel prefix-sum algorithms implement the scan efficiently:
- Blelloch scan: Two-phase up-sweep and down-sweep producing $O(N)$ work and $O(\log N)$ span.
- Ladner–Fischer scan: In-place tree circuit with $O(N)$ work and $O(\log N)$ span, minimal memory overhead.
- Hillis–Steele scan: Simpler, $O(N \log N)$ work and $O(\log N)$ span, generally suboptimal for large $N$.
On GPUs and multicore systems, each time step is assigned to a thread or core. Matrix operations per element dominate the work, scaling as $O(n^3)$ for state dimension $n$. Two passes (filtering and smoothing) complete all moments in $O(\log N)$ parallel time, with $O(N n^3)$ total work and $O(\log N)$ synchronizations (Särkkä et al., 13 Nov 2025).
GPU/CPU implementation involves:
- Kernel for local element construction
- Kernel(s) for scan operations, with synchronization after each tree merge layer
- Optional backward scan for smoothing
- Memory and computational graphs (e.g., TensorFlow, PyTorch) to enable automatic differentiation and learning (Corenflos et al., 2021)
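As a usage sketch of such an implementation, the scan can be JIT-compiled and differentiated end to end for parameter learning. The example below reuses the hypothetical `parallel_kalman_filter` from the earlier code block; the model matrices, sizes, and data are placeholders.

```python
# Usage sketch: jit-compile the parallel filter and differentiate through the scan.
# Model matrices and data are placeholders; `parallel_kalman_filter` is the sketch above.
import jax
import jax.numpy as jnp

n, d, N = 2, 1, 1000
F = jnp.array([[1.0, 1.0], [0.0, 1.0]])                 # constant-velocity transition
Q = 0.1 * jnp.eye(n)
H = jnp.array([[1.0, 0.0]])
R = jnp.eye(d)
m0, P0 = jnp.zeros(n), jnp.eye(n)
ys = jax.random.normal(jax.random.PRNGKey(0), (N, d))   # placeholder measurements

means, covs = jax.jit(parallel_kalman_filter)(F, Q, H, R, m0, P0, ys)

def loss(q_scale):
    # Differentiable scalar functional of the filtered means, e.g. for gradient-based tuning.
    m, _ = parallel_kalman_filter(F, q_scale * jnp.eye(n), H, R, m0, P0, ys)
    return jnp.mean(m ** 2)

grad_q = jax.grad(loss)(0.1)
```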
4. Extensions and Applications
Parallel-in-time Kalman filtering has been adapted to:
- Gaussian Process regression in state-space form: Provides parallel-in-time inference for large temporal GPs; experiments show >10× speedups vs sequential approaches for long time series (Corenflos et al., 2021). A state-space construction for a Matérn kernel is sketched after this list.
- Probabilistic numerical ODE solvers: The iterated extended Kalman smoother (IEKS), adapted for prefix-scan, supports parallel filtering and smoothing for latent SDE models; speedups of up to 100× are observed for long time grids (Bosch et al., 2023).
- Slow-Rate Integrated Measurement systems: SRTM models (integrated measurements over intervals) are restructured using associative operators; empirical GPU speedups of 8–10× are reported for long sequences (Yaghoobi et al., 2024).
- Block-partitioned, space-time parallel KFs: Domain decomposition with overlap-regularization enables parallel Kalman filtering across both time and spatial subdomains with quadratic scaling in domain count (D'Amore et al., 2023).
- Orthogonal transformation-based parallel smoothers: QR and selective inversion algorithms provide numerically stable, highly parallel smoothing, with up to 47× speedup over sequential RTS methods on 64-core ARM/Intel systems (Gargir et al., 17 Feb 2025).
- Two-filter parallel smoothers: Forward and backward filters via prefix-scans on separate GPUs deliver up to 1.5× speedup over single-GPU smoothing (Särkkä et al., 13 Nov 2025).
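As a concrete illustration of the first application above, the sketch below shows how a temporal GP can be cast into the discrete state-space form on which the (parallel) filters operate, assuming a Matérn-3/2 kernel; the function name and hyperparameter names are hypothetical.

```python
# Sketch: Matérn-3/2 temporal GP in state-space form. The resulting (F, Q, H) plug
# directly into the (parallel) Kalman filter/smoother; Pinf serves as the prior covariance.
import jax.numpy as jnp
from jax.scipy.linalg import expm

def matern32_state_space(lengthscale, variance, dt):
    lam = jnp.sqrt(3.0) / lengthscale
    Fc = jnp.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])         # continuous-time drift matrix
    Pinf = jnp.diag(jnp.array([variance, lam**2 * variance]))   # stationary state covariance
    F = expm(Fc * dt)                                           # discrete-time transition over step dt
    Q = Pinf - F @ Pinf @ F.T                                   # matched discrete process noise
    H = jnp.array([[1.0, 0.0]])                                 # observe the GP value component
    return F, Q, H, Pinf
```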
5. Numerical Stability and Limitations
Numerical stability varies by method:
- Pure associative scan methods, per Särkkä & García-Fernández, offer no direct backward-stability guarantees; rounding errors can accumulate across parallel merges, particularly for long chains or high state dimension $n$ (Gargir et al., 17 Feb 2025).
- Householder QR-based parallel smoothers are conditionally backward-stable, with error bounds comparable to conventional QR methods. Selective inversion (SelInv) can be used to extract covariance blocks efficiently and stably (Gargir et al., 17 Feb 2025).
- Square-root covariance formulations via QR ensure stability in probabilistic ODE solvers and GP regression (Bosch et al., 2023).
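A minimal sketch of the kind of QR-based square-root step referenced in the last item follows; it assumes lower-triangular Cholesky factors, the names are illustrative, and it is not the exact formulation of the cited solvers.

```python
# Sketch: QR-based square-root prediction step. Propagates a Cholesky factor of the
# covariance without ever forming F P F^T + Q explicitly, which is the numerical
# motivation behind square-root Kalman/ODE-filter formulations.
import jax.numpy as jnp

def sqrt_predict_step(S_P, F, S_Q):
    """Given lower-triangular factors with P = S_P S_P^T and Q = S_Q S_Q^T,
    return a (lower-triangular) factor of the predicted covariance F P F^T + Q."""
    stacked = jnp.vstack([S_P.T @ F.T, S_Q.T])     # (2n, n); stacked^T stacked = F P F^T + Q
    _, R = jnp.linalg.qr(stacked, mode='reduced')  # R^T R = stacked^T stacked
    return R.T                                     # square-root factor of the predicted covariance
```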
Limiting factors include:
- Cost per time step grows as $O(n^3)$ in the state dimension $n$ due to matrix algebra.
- For very large state dimensions, per-step memory and compute become bottlenecks.
- Exactness applies only to linear-Gaussian state-space models; nonlinear latent states require further generalization (e.g., sigma-point or extended KF variants).
- Prefix-scan methods require enough temporal steps (at least on the order of the number of available cores) to amortize kernel launch and synchronization overhead (Särkkä et al., 13 Nov 2025).
6. Comparative Performance and Practical Recommendations
Empirical analysis reveals:
- On GPUs, the Blelloch and Ladner–Fischer scans provide the best work/span trade-off for large $N$; for smaller $N$, simpler scans (Hillis–Steele/Sengupta B) may have lower overhead.
- Two-filter smoothers excel for very large $N$ if two GPUs are available. On a single GPU, standard PRTS scan-based smoothers are simpler and slightly faster.
- Speedups of up to 500–750× (GPU) and 47× (multi-core CPU) are attainable for sufficiently large $N$ and moderate state dimension $n$ (Corenflos et al., 2021, Gargir et al., 17 Feb 2025, Särkkä et al., 13 Nov 2025). Memory-footprint considerations favor in-place scans for massive $N$.
| Algorithm | Complexity (work/span) | Speedup | Typical Hardware |
|---|---|---|---|
| Assoc. Scan (PSSGP, PRTS) | $O(Nn^3)$ work / $O(\log N)$ span | $10$–$750\times$ | GPU, multicore CPU |
| Odd–Even QR (UltimateKF) | $O(Nn^3)$ work / $O(\log N)$ span | $20$–$47\times$ | Multicore CPU |
| DD-KF (space–time) | Work split across space–time subdomains | Near-quadratic in subdomain count | Distributed clusters |
7. Connections, Limitations, and Future Directions
Parallel-in-time filtering and smoothing represent a fundamental shift in temporal algorithm design for Bayesian state estimation. They subsume and outperform prior sparse-matrix and block-tridiagonal methods that lacked black-box associative scan capacity (Corenflos et al., 2021). Ongoing research explores:
- Nonlinear extensions via extended/sigma-point scans (Corenflos et al., 2021, Bosch et al., 2023).
- Distributed-memory, multi-GPU implementations for massive time series.
- Integration with variational and sparse GP techniques to alleviate the $O(n^3)$ per-step cost and overall memory scaling (Corenflos et al., 2021).
- Optimized kernels for memory and synchronization efficiency in practical deployment (Särkkä et al., 13 Nov 2025).
- Space–time decomposition for complementary spatial parallelism (D'Amore et al., 2023).
A plausible implication is that for appropriate models and hardware, parallel-in-time Kalman filtering and smoothing are poised to become the standard for high-throughput, high-dimensional, recursive Bayesian estimation in temporal applications. Limitations regarding model generality, numerical stability, and per-step computational cost remain areas of active development.