Quantitative characterization and mitigation of parallel-to-sequential divergence in PHCSSM
Quantitatively characterize the numerical divergence between PHCSSM’s parallel multi-transmission-loop inference (a Jacobi-style fixed-point iteration over the whole sequence) and its sequential recurrent spiking neural network (RSNN) execution (a Gauss–Seidel causal trajectory), and develop bridging strategies, such as post-training sequential fine-tuning, that reduce or eliminate the discrepancy between their attractors and outputs.
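The Jacobi-versus-Gauss–Seidel gap can be made concrete on a toy model. The sketch below uses a minimal scalar LIF-style recurrence with a hard spike threshold; the dynamics, parameter values, and function names (`sequential_rsnn`, `parallel_rsnn`) are illustrative assumptions, not PHCSSM's actual equations. The sequential rollout updates each timestep causally, while the parallel version sweeps over all timesteps simultaneously using the previous iterate's spikes; with a truncated sweep budget `K < T`, the two can disagree, and the mismatch vanishes once `K` reaches the sequence length (each sweep propagates exact causality one step further).

```python
import numpy as np

def sequential_rsnn(x, a=0.9, w=-0.5, theta=1.0):
    """Gauss-Seidel: causal, step-by-step rollout of a toy LIF recurrence."""
    T = len(x)
    v = np.zeros(T)
    s = np.zeros(T)
    v_prev, s_prev = 0.0, 0.0
    for t in range(T):
        # leak, reset-on-spike, recurrent spike feedback, input drive
        v[t] = a * v_prev * (1.0 - s_prev) + w * s_prev + x[t]
        s[t] = float(v[t] >= theta)  # discontinuous spike threshold
        v_prev, s_prev = v[t], s[t]
    return s

def parallel_rsnn(x, K, a=0.9, w=-0.5, theta=1.0):
    """Jacobi: K full-sequence sweeps, each reading the previous iterate's state."""
    T = len(x)
    v = np.zeros(T)
    s = np.zeros(T)
    for _ in range(K):
        v_prev = np.concatenate(([0.0], v[:-1]))  # time-shifted previous iterate
        s_prev = np.concatenate(([0.0], s[:-1]))
        v = a * v_prev * (1.0 - s_prev) + w * s_prev + x
        s = (v >= theta).astype(float)
    return s

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.2, size=64)
s_seq = sequential_rsnn(x)
for K in (1, 4, 16, 64):
    mismatches = int(np.sum(parallel_rsnn(x, K) != s_seq))
    print(f"K={K:3d} sweeps: {mismatches} mismatched spikes")
# With K equal to the sequence length, causality has fully propagated
# and the mismatch count is exactly 0.
```

In this causal toy model the only fixed point is the sequential trajectory, so the divergence comes from truncating the sweep budget; the attractor-level discrepancy described above would additionally require the Jacobi iteration to cycle or settle elsewhere under the threshold discontinuity, which this sketch does not attempt to reproduce.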
References
The parallel-to-sequential transition introduces a numerical divergence analogous to the accuracy gap documented in the ANN-to-SNN conversion literature, but arising from a different mechanism: the multi-transmission loop computes a sequence-level fixed point via Jacobi-style parallel iteration, whereas step-by-step RSNN execution follows a Gauss–Seidel causal trajectory that may converge to a distinct attractor due to the discontinuous spike threshold. Quantitative characterization of this divergence and bridging strategies (e.g., post-training sequential fine-tuning) are deferred to future work.