Length-Normalized Path Signature (LNPS)
- LNPS is a scale-invariant descriptor that normalizes iterated integrals by the total path length to create a compact, discriminative representation.
- It captures geometric features of a path through iterated integrals and achieves rotation invariance through linear combinations of signature components.
- Empirical studies in online signature verification demonstrate that LNPS, when integrated with RNNs, significantly reduces error rates in pattern recognition tasks.
The length-normalized path signature (LNPS) is a mathematical descriptor for characterizing continuous or discrete paths, especially in applications requiring invariance to scale and, via linear combination, to rotation. Defined in terms of iterated integrals and normalized by the total path length, LNPS provides a compact, discriminative, and theoretically grounded summary of path geometry. Its principal applications include online signature verification and, more broadly, the analysis of planar paths with bounded variation. LNPS builds upon the classical path signature framework, with normalization yielding desirable invariance and stability properties and facilitating principled comparisons and learning in pattern recognition contexts (Lai et al., 2017, Boedihardjo et al., 2020).
1. Formal Definition of the Path Signature and LNPS
Let $X : [0,T] \to \mathbb{R}^d$ be a continuous path of bounded variation. The (truncated) path signature up to level $M$ is defined via iterated integrals as

$$S(X) = \big(1,\, S^{(1)}(X),\, S^{(2)}(X),\, \dots,\, S^{(M)}(X)\big),$$

where $S^{(k)}(X) \in (\mathbb{R}^d)^{\otimes k}$ and, for any multi-index $(i_1, \dots, i_k)$ with $1 \le i_j \le d$,

$$S^{(i_1,\dots,i_k)}(X) = \int_{0 < t_1 < \cdots < t_k < T} dX^{i_1}_{t_1} \cdots dX^{i_k}_{t_k}.$$
For the discrete case with increments $\Delta X_j = X_{t_j} - X_{t_{j-1}}$, iterated sums approximate the integrals, with dynamic programming used to reduce complexity via Chen's identity.
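The discrete iterated sums and Chen's identity can be illustrated with a short numpy sketch (the function name `sig_level12` and the depth-2 truncation are illustrative choices, not the reference implementation):

```python
import numpy as np

def sig_level12(path):
    """Level-1 and level-2 signature of a piecewise-linear path.

    path: (n+1, d) array of points.  Returns (S1, S2): each linear
    segment with increment D contributes D at level 1, and
    S1_so_far (x) D + D (x) D / 2 at level 2 (iterated sums).
    """
    d = path.shape[1]
    S1 = np.zeros(d)
    S2 = np.zeros((d, d))
    for D in np.diff(path, axis=0):
        S2 += np.outer(S1, D) + 0.5 * np.outer(D, D)
        S1 += D
    return S1, S2

# Chen's identity at levels <= 2: for the concatenation X * Y,
# S1 = S1x + S1y and S2 = S2x + S1x (x) S1y + S2y.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))
Y = X[-1] + np.cumsum(rng.standard_normal((4, 2)), axis=0)
XY = np.vstack([X, Y])

S1x, S2x = sig_level12(X)
S1y, S2y = sig_level12(np.vstack([X[-1:], Y]))
S1, S2 = sig_level12(XY)

assert np.allclose(S1, S1x + S1y)
assert np.allclose(S2, S2x + np.outer(S1x, S1y) + S2y)
```

The incremental update inside the loop is exactly what makes the dynamic-programming reduction possible: the signature of a longer path is assembled from the signature computed so far and the next increment.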
Length-normalization uses the total path length, $\ell = \int_0^T |dX_t|$ in the continuous case or $\ell = \sum_j |\Delta X_j|$ for sampled data. The level-$k$ component of the LNPS is

$$\widetilde{S}^{(k)}(X) = \frac{S^{(k)}(X)}{\ell^k},$$

leading to the overall length-normalized path signature up to level $M$:

$$\mathrm{LNPS}(X) = \big(1,\, \widetilde{S}^{(1)}(X),\, \dots,\, \widetilde{S}^{(M)}(X)\big).$$

This construction yields a descriptor that is scale-invariant by design (Lai et al., 2017).
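A minimal sketch of the construction, flattening the normalized levels into one feature vector (the helper name `lnps` and the depth-2 truncation are illustrative, not the paper's implementation):

```python
import numpy as np

def lnps(path, depth=2):
    """Length-normalized path signature up to `depth` (here <= 2),
    flattened into a single descriptor vector."""
    incs = np.diff(path, axis=0)
    ell = np.linalg.norm(incs, axis=1).sum()   # total path length
    d = path.shape[1]
    S1 = np.zeros(d)
    S2 = np.zeros((d, d))
    for D in incs:
        S2 += np.outer(S1, D) + 0.5 * np.outer(D, D)
        S1 += D
    feats = [S1 / ell]                          # level k scaled by ell**k
    if depth >= 2:
        feats.append((S2 / ell**2).ravel())
    return np.concatenate(feats)
```

For a straight segment from the origin to $(3,4)$, the level-1 part of the descriptor is the unit direction $(0.6, 0.8)$, independent of the segment's length.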
2. Theoretical Invariance Properties
Scale Invariance
Normalization by $\ell^k$ at level $k$ guarantees invariance to scaling of the path. Specifically, for any scalar $\lambda > 0$, $S^{(k)}(\lambda X) = \lambda^k S^{(k)}(X)$ and $\ell(\lambda X) = \lambda\, \ell(X)$. Consequently,

$$\widetilde{S}^{(k)}(\lambda X) = \frac{\lambda^k S^{(k)}(X)}{(\lambda \ell)^k} = \widetilde{S}^{(k)}(X).$$
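The scaling identities above can be verified numerically on a random walk (a sketch, with an arbitrary scale factor `lam`):

```python
import numpy as np

def sig12_and_length(path):
    """Levels 1-2 of the piecewise-linear signature, plus path length."""
    incs = np.diff(path, axis=0)
    ell = np.linalg.norm(incs, axis=1).sum()
    d = path.shape[1]
    S1 = np.zeros(d)
    S2 = np.zeros((d, d))
    for D in incs:
        S2 += np.outer(S1, D) + 0.5 * np.outer(D, D)
        S1 += D
    return S1, S2, ell

rng = np.random.default_rng(1)
X = np.cumsum(rng.standard_normal((20, 2)), axis=0)
lam = 7.3
S1, S2, ell = sig12_and_length(X)
T1, T2, ellL = sig12_and_length(lam * X)

# S_k scales by lam**k, ell by lam, so S_k / ell**k is unchanged.
assert np.allclose(T1, lam * S1) and np.allclose(T2, lam**2 * S2)
assert np.isclose(ellL, lam * ell)
assert np.allclose(T1 / ellL, S1 / ell)
assert np.allclose(T2 / ellL**2, S2 / ell**2)
```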
Rotation Invariance (Via Linear Combinations)
For a rotation $R \in SO(d)$, $S^{(k)}(RX) = R^{\otimes k} S^{(k)}(X)$, ensuring that any contraction with an $SO(d)$-invariant multilinear form yields a rotation-invariant scalar. In $\mathbb{R}^2$, the antisymmetric second-order combination

$$A = \tfrac{1}{2}\big(S^{(1,2)}(X) - S^{(2,1)}(X)\big)$$

is rotation-invariant and corresponds to the signed area enclosed by $X$ (Lai et al., 2017).
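The rotation invariance of the antisymmetric (signed-area) combination can be checked directly: the individual level-2 entries change under rotation, but their antisymmetric part does not (a sketch with an arbitrary rotation angle):

```python
import numpy as np

def level2(path):
    """Level-2 signature tensor of a planar piecewise-linear path."""
    S1 = np.zeros(2)
    S2 = np.zeros((2, 2))
    for D in np.diff(path, axis=0):
        S2 += np.outer(S1, D) + 0.5 * np.outer(D, D)
        S1 += D
    return S2

rng = np.random.default_rng(2)
X = np.cumsum(rng.standard_normal((15, 2)), axis=0)
theta = 0.9
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

S2x = level2(X)
S2r = level2(X @ R.T)          # every point rotated by R
A  = 0.5 * (S2x[0, 1] - S2x[1, 0])   # signed (Levy) area
Ar = 0.5 * (S2r[0, 1] - S2r[1, 0])
assert np.isclose(A, Ar)
```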
3. Computational Methodology and Complexity
LNPS is computed over a sliding window along the path. At each center index, the local subpath within the window is processed to compute all signature levels up to the truncation level, each level $k$ normalized by $\ell^k$, and each feature channel is z-normalized across the full sample. Naive evaluation of the iterated sums in each window grows combinatorially with the window length, but application of Chen's identity reduces the per-window cost to a single linear pass, making the total cost linear in the sequence length. For user verification tasks sampled at 100 Hz, a small truncation level (e.g. $3$) suffices, permitting real-time computation (Lai et al., 2017).
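The windowed pipeline can be sketched as follows. For brevity this recomputes each window naively rather than applying the incremental Chen update, and the window length `w` is a hypothetical choice, not the paper's setting:

```python
import numpy as np

def window_lnps(path, w=10):
    """Sliding-window LNPS features (levels 1-2), z-normalized per
    channel across the whole sample.  Simplified sketch: O(w) work
    per window instead of an incremental Chen-identity update."""
    n, d = path.shape
    half = w // 2
    rows = []
    for i in range(half, n - half):
        seg = path[i - half:i + half + 1]       # local subpath
        incs = np.diff(seg, axis=0)
        ell = np.linalg.norm(incs, axis=1).sum() + 1e-12
        S1 = np.zeros(d)
        S2 = np.zeros((d, d))
        for D in incs:
            S2 += np.outer(S1, D) + 0.5 * np.outer(D, D)
            S1 += D
        rows.append(np.concatenate([S1 / ell, (S2 / ell**2).ravel()]))
    F = np.array(rows)
    # z-normalize each feature channel across the full sample
    return (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-12)
```

Each window thus yields one fixed-length feature vector, so a pen trajectory becomes a sequence of LNPS descriptors suitable for DTW or an RNN.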
4. LNPS Asymptotics and Path Length Recovery
Theoretical work on the normalized signature, particularly by Boedihardjo and Geng, establishes that, for planar paths of bounded variation, the path length can be recovered asymptotically from the normalized signature:

$$\ell(X) = \lim_{k \to \infty} \big(k!\, \|S^{(k)}(X)\|_{\mathrm{proj}}\big)^{1/k},$$

where $S^{(k)}(X)$ is the $k$-th level iterated integral and $\|\cdot\|_{\mathrm{proj}}$ is the projective tensor norm (Boedihardjo et al., 2020). This isometry property holds whenever the path is tree-reduced, i.e., possesses no nontrivial tree-like subarcs. The proof uses a development of the path into a suitable Lie group, with the key technical component being the decoupling of the associated ODEs into radial and angular dynamics and the demonstration that, under appropriate angle-bound conditions, the signature growth asymptotically matches the true path length.
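As a sanity check on the limit formula, consider the simplest tree-reduced path, a straight segment $X_t = t v$, $t \in [0,1]$, with $v \in \mathbb{R}^2$. Its signature is rank-one at every level, and the limit is attained exactly (a worked example, not part of the cited proof):

```latex
S^{(k)}(X) = \frac{v^{\otimes k}}{k!},
\qquad
\|v^{\otimes k}\|_{\mathrm{proj}} = |v|^k,
\]
so that for every $k$
\[
\big(k!\,\|S^{(k)}(X)\|_{\mathrm{proj}}\big)^{1/k}
 = \big(k! \cdot \tfrac{|v|^k}{k!}\big)^{1/k}
 = |v| = \ell(X).
```

For general tree-reduced paths the equality holds only in the limit; the $k!$ factor compensates for the factorial decay of the signature levels.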
5. Integration into RNN-based Signature Verification
LNPS descriptors are used as input sequences for recurrent neural networks in online signature verification. Specifically, a two-layer gated recurrent unit (GRU, 128 units each) is followed by a 64-dimensional fully connected layer that yields the signature embedding.
Training employs a combination of a triplet loss (to push distances between forgeries and genuine signatures above a margin while tightening intra-class distances) and a center loss (to cluster each client's signatures around a learned per-client center), with the final loss a weighted sum of the two terms; the margin and weighting hyperparameters are specified in (Lai et al., 2017). Optimization thereby maintains both inter-class separation and intra-class compactness.
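The combined objective can be sketched in numpy. The hinge form of the triplet term and the squared-distance center term are standard; the hyperparameters `margin` and `lam` below are illustrative placeholders, not the paper's values:

```python
import numpy as np

def combined_loss(emb, labels, centers, margin=1.0, lam=0.01):
    """Triplet + center loss over one batch of embeddings.

    emb: (n, dim) embeddings; labels: length-n client ids;
    centers: dict mapping client id -> (dim,) learned center.
    """
    n = len(emb)
    dist = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
    triplet, count = 0.0, 0
    for a in range(n):
        for p in range(n):
            if p == a or labels[p] != labels[a]:
                continue
            for q in range(n):
                if labels[q] == labels[a]:
                    continue
                # hinge: anchor-positive must beat anchor-negative by margin
                triplet += max(0.0, dist[a, p] - dist[a, q] + margin)
                count += 1
    triplet /= max(count, 1)
    # center loss: pull each embedding toward its client's center
    center = np.mean([np.sum((e - centers[c])**2)
                      for e, c in zip(emb, labels)])
    return triplet + lam * center
```

In practice the centers are learned jointly with the network; here they are passed in as fixed arrays to keep the sketch self-contained.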
6. Empirical Performance and Applications
In dynamic time warping (DTW) frameworks, LNPS achieves the following equal error rates (EER) on the SVC-2004 dataset:
| LNPS Level | EER (%) |
|---|---|
| Level 1 (I¹/ℓ) | 8.1–8.7 |
| Level 2 (I²/ℓ²) | 5.4–6.6 |
| Level 4 (I⁴/ℓ⁴) | 4.98 (best) |
| Rot-inv up to 4 | 5.26 |
When combined with RNNs in a metric-learning setup, using multiple genuine templates per client and joint training on SVC-2004+MCYT-100, a state-of-the-art EER of 2.37% is obtained. Notably, omitting the LNPS features from the input degrades performance to a higher EER. Joint training with additional clients further lowers the EER, evidencing the benefit of LNPS as a compact, (length-)stable, and discriminative path representation (Lai et al., 2017).
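The DTW matcher underlying the table above can be summarized in a few lines; this is the textbook quadratic-time recurrence applied to sequences of per-window feature vectors (such as LNPS descriptors), not the exact variant used in the cited experiments:

```python
import numpy as np

def dtw(A, B):
    """Dynamic-time-warping distance between two feature sequences
    A (n, d) and B (m, d), with Euclidean local cost."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])
            # extend the cheapest of the three admissible predecessors
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Verification then thresholds the DTW distance between a query signature's feature sequence and the enrolled templates.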
7. Structural and Geometric Significance
LNPS embeds rich geometric information by capturing all low-order polynomial features of the path, with length-normalization eliminating sensitivity to speed or global scale. The ability to extract invariant scalars (e.g., signed area) via linear contraction with invariant tensors is especially useful in applications where orientation variability is significant. The length-recovery asymptotic further connects LNPS to foundational questions in rough path theory and geometry, such as the classification and measurement of curves in terms of their signatures (Boedihardjo et al., 2020). This interplay establishes LNPS as a theoretically robust and practically effective tool in computational pattern analysis.