Quantitative characterization and mitigation of parallel-to-sequential divergence in PHCSSM

Quantitatively characterize the numerical divergence between PHCSSM’s parallel multi-transmission-loop inference (a Jacobi-style fixed-point iteration) and its sequential recurrent spiking neural network (RSNN) execution (a Gauss–Seidel causal trajectory), and develop bridging strategies, such as post-training sequential fine-tuning, that reduce or eliminate the discrepancy between their attractors and outputs.

Background

PHCSSM can be executed either as a parallel fixed-point iteration over the whole sequence via the multi-transmission loop (a Jacobi-style scheme, in which every time step is updated simultaneously from the previous iterate) or as a standard sequential RSNN (a Gauss–Seidel scheme, in which each step consumes the already-updated states of earlier steps). Because the spike threshold is discontinuous, the two update schemes can converge to distinct attractors, producing numerical divergence between the execution modes.
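The mechanism can be illustrated with a minimal thresholded network. This is not PHCSSM itself; the two-unit mutual-inhibition coupling, input drive, and threshold below are invented purely to show how a discontinuous spike nonlinearity lets Jacobi and Gauss–Seidel updates settle into different attractors from the same initial state:

```python
import numpy as np

def heaviside(x):
    """Discontinuous spike threshold: emit 1 if membrane drive >= 0, else 0."""
    return (x >= 0).astype(float)

# Toy two-unit network with strong mutual inhibition (illustrative values only).
W = np.array([[0.0, -2.0],
              [-2.0, 0.0]])   # each unit inhibits the other
u = np.array([1.0, 1.0])      # both units driven above threshold
theta = 0.5

def jacobi_step(s):
    """Parallel (Jacobi-style) update: every unit reads the previous iterate."""
    return heaviside(W @ s + u - theta)

def gauss_seidel_step(s):
    """Sequential (Gauss-Seidel) update: each unit reads already-updated units."""
    s = s.copy()
    for i in range(len(s)):
        s[i] = heaviside(W[i] @ s + u[i] - theta)
    return s

def iterate(step, s0, n=20):
    traj = [s0]
    for _ in range(n):
        traj.append(step(traj[-1]))
    return traj

s0 = np.zeros(2)
jac = iterate(jacobi_step, s0)
gs = iterate(gauss_seidel_step, s0)

# Jacobi oscillates between (1,1) and (0,0): a period-2 attractor.
# Gauss-Seidel settles on the fixed point (1,0) after one sweep.
print("Jacobi tail:      ", [tuple(s) for s in jac[-3:]])
print("Gauss-Seidel tail:", [tuple(s) for s in gs[-3:]])
print("Hamming divergence of final states:", int(np.sum(jac[-1] != gs[-1])))
```

Here the parallel iteration never reaches a fixed point (it cycles), while the causal sweep lands on one of the network's two stable states; a state-wise Hamming distance is one simple divergence metric of the kind the proposed study would need, alongside output-level and spike-train distances.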

The authors explicitly defer a quantitative study of this divergence, along with methods to bridge it (such as post-training fine-tuning in the sequential regime), leaving an unresolved methodological gap relevant to deployment and interpretability.

References

The parallel-to-sequential transition introduces a numerical divergence analogous to the accuracy gap documented in the ANN-to-SNN conversion literature, but arising from a different mechanism: the multi-transmission loop computes a sequence-level fixed point via Jacobi-style parallel iteration, whereas step-by-step RSNN execution follows a Gauss–Seidel causal trajectory that may converge to a distinct attractor due to the discontinuous spike threshold. Quantitative characterization of this divergence and bridging strategies (e.g., post-training sequential fine-tuning) are deferred to future work.

Parallelized Hierarchical Connectome: A Spatiotemporal Recurrent Framework for Spiking State-Space Models  (2604.01295 - Chiang, 1 Apr 2026) in Discussion — paragraph on parallel-to-sequential divergence