
Sliding Window Path Signature (SW-PS)

Updated 9 February 2026
  • SW-PS is a feature extraction method that computes translation- and rotation-invariant representations of trajectory data by applying the path signature transform within a sliding window.
  • The methodology integrates local algebraic features with Linear Recurrent Units to enable efficient, parallelized sequence modeling, significantly improving convergence and accuracy.
  • Empirical results demonstrate SW-PS’s scalability and robustness in online handwritten recognition tasks, achieving high accuracy even under extreme geometric transformations.

The Sliding Window Path Signature (SW-PS) is a feature extraction methodology that combines concepts from rough path theory and modern sequence modeling. SW-PS is designed to compute succinct, translation- and rotation-invariant representations of streaming trajectory data—such as online handwritten character strokes—by applying the path signature transform within a moving window over the signal. This yields local, algebraic features that can then be input to efficient sequence models such as Linear Recurrent Units (LRUs). The SW-PS approach is principally motivated by the need for robust sequence representations in the presence of deformations (e.g., rotation, translation) and non-stationary statistics across a signal, while maintaining computational tractability for both training and inference on long sequences.

1. Mathematical Foundations of the Sliding Window Path Signature

SW-PS builds on the path signature map: a construction from stochastic analysis where a continuous path is embedded into a graded tensor algebra via iterated integrals. For a $d$-dimensional path $X_t \in \mathbb{R}^d$ on $[a, b]$, the order-$m$ term of the signature over an interval $[s, t]$ is defined as

$$S^{(m)}(X)_{[s,t]} = \int_{s<u_1<\cdots<u_m<t} dX_{u_1} \otimes \cdots \otimes dX_{u_m},$$

and the full signature is the collection $\left\{ S^{(m)}(X)_{[s,t]} \right\}_{m \geq 1}$.

The key properties of the signature transform include:

  • Translation invariance: $S^{(m)}(X+c) = S^{(m)}(X)$ for any constant $c \in \mathbb{R}^d$.
  • Algebraic universality: linear functionals of the signature are dense in the continuous functions on (compact sets of) paths, so the truncated signature up to degree $M$ is a highly expressive feature map.

The "sliding window" variant computes the truncated path signature of order $M$ on each window $[t, t+\ell)$ as the window moves over the sampled path data $\{x_k\}$, yielding the SW-PS feature sequence

$$\text{SW-PS}_{t}^{(M,\ell)} = S^{(M)}(\{x_{t},\ldots,x_{t+\ell-1}\}).$$

The window length $\ell$ and the signature truncation order $M$ are hyperparameters governing temporal locality and geometric expressiveness, respectively.
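For discretely sampled (piecewise-linear) paths, each window's signature can be computed segment by segment via Chen's identity: the signature of a concatenation is the truncated tensor product of the segment signatures. The sketch below is a minimal pure-Python illustration; the names `seg_sig`, `chen`, and `sliding_window_signature` are ours, not from the paper.

```python
from itertools import product
from math import factorial

def seg_sig(delta, M):
    """Truncated signature of one linear segment with increment `delta`:
    level m is delta^{(tensor)m} / m!  (level 0 is the scalar 1)."""
    d = len(delta)
    levels = [[1.0]]
    for m in range(1, M + 1):
        lvl = []
        for idx in product(range(d), repeat=m):
            v = 1.0 / factorial(m)
            for i in idx:
                v *= delta[i]
            lvl.append(v)
        levels.append(lvl)
    return levels

def chen(s, t, d, M):
    """Chen's identity: signature of a concatenated path is the truncated
    tensor product of the two signatures, level by level."""
    out = []
    for m in range(M + 1):
        lvl = [0.0] * (d ** m)
        for k in range(m + 1):
            for i, x in enumerate(s[k]):
                for j, y in enumerate(t[m - k]):
                    lvl[i * d ** (m - k) + j] += x * y
        out.append(lvl)
    return out

def sliding_window_signature(points, window, M):
    """SW-PS features: depth-M signature of every length-`window` slice,
    with levels 1..M flattened into one feature vector per position t."""
    d = len(points[0])
    feats = []
    for t in range(len(points) - window + 1):
        sig = [[1.0]] + [[0.0] * (d ** m) for m in range(1, M + 1)]
        for a, b in zip(points[t:t + window - 1], points[t + 1:t + window]):
            delta = [bi - ai for ai, bi in zip(a, b)]
            sig = chen(sig, seg_sig(delta, M), d, M)
        feats.append([v for lvl in sig[1:] for v in lvl])
    return feats
```

For a 2-dimensional path with $M = 3$, each window yields $2 + 4 + 8 = 14$ features, and translating every point by a constant leaves the features unchanged, since only increments enter the computation.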

2. SW-PS in Online Handwritten Character Recognition

The canonical application of SW-PS is in online handwritten character recognition tasks where each character is captured as a sequence of 2D or 3D pen positions. The rich local structure encoded by short path signatures captures distinctive geometric primitives (corners, curves, directional changes) and is robust to global transformations.

In "Rotation-free Online Handwritten Character Recognition Using Linear Recurrent Units" (Ling et al., 2 Feb 2026), SW-PS is used to map each sliding window of a stroke sequence into a 90-dimensional vector (with $\ell = 5$, $M = 3$). The high-level procedure is:

  • For each position $t$ in the stroke sequence, compute $\text{SW-PS}_{t}^{(M=3,\ell=5)}$.
  • Stack the SW-PS features into a new sequence $\{x_t'\}$ for input to an LRU-based classifier.

This approach yields rotation invariance and substantial gains in accuracy and convergence robustness under adversarial geometric perturbations.
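The stacking step amounts to simple shape bookkeeping, sketched below. Here `window_feature` is a deliberately trivial stand-in (flattened per-step increments) for the order-3 window signature described above; only the windowing and stacking logic is the point.

```python
def window_feature(pts):
    # Stand-in for the order-M window signature: flattened per-step increments.
    return [b[i] - a[i] for a, b in zip(pts, pts[1:]) for i in range(len(a))]

def stack_swps(stroke, ell=5):
    """Build the feature sequence {x'_t}: one vector per window position t."""
    return [window_feature(stroke[t:t + ell]) for t in range(len(stroke) - ell + 1)]

stroke = [(float(t), float(t % 3)) for t in range(12)]  # toy 2-D pen trace
seq = stack_swps(stroke, ell=5)
# 12 - 5 + 1 = 8 window positions; (5 - 1) * 2 = 8 features each
```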

3. Integration with Linear Recurrent Units

SW-PS is designed specifically to provide input features compatible with sequence models that efficiently process long dependence structures. LRUs are a natural downstream model:

  • The LRU maintains a hidden state via a linear recurrence $h_t = A h_{t-1} + B x_t$, where $A$ is complex-diagonalizable, yielding stable evolution and efficient parallelism (Yue et al., 2023, Orvieto et al., 2023).
  • The stacked LRU blocks, often with normalization, nonlinear gating such as GELU/GLU, and residual connections, process the SW-PS feature sequence and yield class probabilities after pooling and MLP projection.
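A minimal sketch of the diagonal recurrence, in plain Python with complex state. This omits the details of the actual LRU parameterization (e.g. exponential eigenvalue parameterization, normalization, gating); the function name `lru_layer` is illustrative.

```python
def lru_layer(xs, lam, B, C):
    """Diagonal linear recurrence h_t = lam * h_{t-1} + B x_t, read out as
    y_t = Re(C h_t). Stability requires |lam[j]| < 1 for every mode j."""
    n = len(lam)
    h = [0j] * n
    ys = []
    for x in xs:
        u = [sum(B[j][i] * x[i] for i in range(len(x))) for j in range(n)]
        h = [lam[j] * h[j] + u[j] for j in range(n)]
        ys.append([sum(C[o][j] * h[j] for j in range(n)).real
                   for o in range(len(C))])
    return ys
```

With a single mode, `lam = [0.5 + 0j]`, `B = [[1.0]]`, `C = [[1.0]]`, an impulse input decays geometrically: outputs 1.0, 0.5, 0.25, illustrating the stable evolution.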

Combined, SW-PS and LRU architectures yield:

  • $O(T)$ memory and compute per sequence for inference.
  • Fully parallelized training via diagonalization and convolutional realization of the recurrence.
  • Empirically, these systems converge faster and outperform RNNs and self-attention models on stroke-recognition benchmarks (Ling et al., 2 Feb 2026).

4. Computational Complexity and Parallelism

The computational efficiency of SW-PS+LRU arises from both the local nature of SW-PS and the algebraic structure of LRUs:

  • The SW-PS feature extraction scales as $O(L \ell d^M)$, where $L$ is the sequence length, due to overlapping windows and polynomial signature enumeration.
  • LRUs, after diagonalization of $A$, admit per-step $O(d)$ cost and a parallel scan implementation that achieves $O(\log T)$ evaluation depth for a sequence of length $T$ (Orvieto et al., 2023, Yue et al., 2023).
  • Compared to attention-based models with $O(T^2)$ cost, the SW-PS+LRU workflow retains suitability for real-time and embedded applications.
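The scan structure can be illustrated as follows: each recurrence step $(\lambda_t, u_t)$ is an affine map $h \mapsto \lambda_t h + u_t$, and composition of affine maps is associative, so any parallel prefix scheme applies. Below, a Hillis-Steele scan is simulated sequentially (on parallel hardware each `step` round runs concurrently, giving $O(\log T)$ depth); the function name is ours.

```python
def scan_recurrence(lams, us):
    """All h_t for h_t = lams[t] * h_{t-1} + us[t] (h_0 = 0), computed via
    an associative combine suitable for an O(log T)-depth parallel scan."""
    def combine(e1, e2):
        # Compose affine maps: apply e1 first, then e2.
        (a1, b1), (a2, b2) = e1, e2
        return (a2 * a1, a2 * b1 + b2)

    elems = list(zip(lams, us))
    n, step = len(elems), 1
    while step < n:  # Hillis-Steele rounds; each round is parallel over i
        elems = [combine(elems[i - step], elems[i]) if i >= step else elems[i]
                 for i in range(n)]
        step *= 2
    return [b for _, b in elems]
```

For `lams = [0.5] * 4` and `us = [1.0] * 4` this returns `[1.0, 1.5, 1.75, 1.875]`, matching the sequential recurrence.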

5. Empirical Results and Robustness Properties

In recognition tasks with extreme geometric augmentation (random rotation up to $\pm 180^\circ$), the SW-PS+LRU system exceeds prior models in test accuracy (99.62% for digits, 96.67% for uppercase English letters, 94.33% for Chinese radicals) on CASIA-OLHWDB1.1 (Ling et al., 2 Feb 2026). The SW-PS transforms provide robust, invariant descriptors, enabling rapid convergence and accuracy advantages. The system is also conducive to ensemble learning, further improving generalization.

6. Implementation Considerations and Limitations

Extending SW-PS to new domains requires:

  • Careful tuning of window length $\ell$ and signature degree $M$ to balance locality and capacity.
  • Memory management for overlapping window computation in long sequences.
  • Efficient software, possibly leveraging fast signature computation libraries and specialized CUDA kernels for LRU parallel scan.

A notable limitation is the rapid growth of the feature dimension with the truncation order $M$: the depth-$M$ truncated signature of a $d$-dimensional path has $\sum_{k=1}^{M} d^k$ components, i.e. exponential growth in $M$. This may be addressed by truncation, sparsity, or learned feature selection.
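The growth is easy to quantify; the small helper below is ours, not from the paper.

```python
def sig_dim(d, M):
    """Number of components in the depth-M truncated signature
    of a d-dimensional path: d + d^2 + ... + d^M."""
    return sum(d ** k for k in range(1, M + 1))

# Growth with truncation order for a 2-D path:
# sig_dim(2, 3) = 14, sig_dim(2, 6) = 126, sig_dim(2, 9) = 1022
```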

7. Significance and Prospects

The SW-PS paradigm demonstrates the practicality of algebraic, windowed feature transforms combined with state-space inspired sequence models such as LRUs. This approach achieves strong invariance properties and competitive performance for time-series and spatio-temporal recognition, with a favorable complexity profile and suitability for parallel hardware (Ling et al., 2 Feb 2026, Yue et al., 2023). Further research may focus on automatic selection of window parameters, integration with end-to-end differentiable models, and extension of SW-PS to higher-dimensional or multimodal sequence data.
