
Length-Normalized Path Signature (LNPS)

Updated 23 February 2026
  • LNPS is a scale-invariant descriptor that normalizes iterated integrals by the total path length to create a compact, discriminative representation.
  • Its iterated-integral components capture low-order geometric features of a path, and rotation-invariant scalars (such as signed area) are obtained through linear combinations of signature components.
  • Empirical studies in online signature verification demonstrate that LNPS, when integrated with RNNs, significantly reduces equal error rates compared with raw coordinate-increment inputs.

The length-normalized path signature (LNPS) is a mathematical descriptor for characterizing continuous or discrete paths, especially in applications requiring invariance to scale and, via linear combination, to rotation. Defined in terms of iterated integrals and normalized by the total path length, LNPS provides a compact, discriminative, and theoretically grounded summary of path geometry. Its principal applications include online signature verification and, more broadly, the analysis of planar paths with bounded variation. LNPS builds upon the classical path signature framework, with normalization yielding desirable invariance and stability properties and facilitating principled comparisons and learning in pattern recognition contexts (Lai et al., 2017, Boedihardjo et al., 2020).

1. Formal Definition of the Path Signature and LNPS

Let $\gamma\colon[0,T]\to\mathbb{R}^d$ be a continuous path of bounded variation. The (truncated) path signature to level $m$ is defined via iterated integrals as

$$S(\gamma)|_m = [\,S^0;\; S^1;\; S^2;\; \ldots;\; S^m\,],$$

where $S^0(\gamma) = 1$ and, for any multi-index $(i_1, \ldots, i_k)$ with $i_j \in \{1, \ldots, d\}$,

$$S^k_{i_1 \dots i_k}(\gamma) = \int_{0 < t_1 < \cdots < t_k < T} d\gamma^{i_1}(t_1) \cdots d\gamma^{i_k}(t_k).$$

For the discrete case with increments $\Delta\gamma_n = \gamma(t_n) - \gamma(t_{n-1})$, iterated sums approximate the integrals, with dynamic programming via Chen's identity used to reduce the computational complexity.
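For piecewise-linear data, the signature up to level 2 can be accumulated increment by increment using Chen's identity; a minimal NumPy sketch (an illustration of the recursion, not the authors' implementation):

```python
import numpy as np

def signature_level2(path):
    """Truncated path signature (levels 1 and 2) of a piecewise-linear
    path, accumulated one increment at a time via Chen's identity.
    `path` is an (N, d) array of sample points."""
    d = path.shape[1]
    S1 = np.zeros(d)
    S2 = np.zeros((d, d))
    for inc in np.diff(path, axis=0):
        # Chen's identity for appending one linear segment: level 2 gains
        # the cross terms S1 (x) inc plus the segment's own inc (x) inc / 2.
        S2 += np.outer(S1, inc) + 0.5 * np.outer(inc, inc)
        S1 += inc
    return S1, S2
```

For the L-shaped path (0,0) → (1,0) → (1,1), this yields $S^1 = (1, 1)$ and $S^2_{12} = 1$, $S^2_{21} = 0$, consistent with a signed (Lévy) area of $1/2$.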

Length normalization uses the total path length, $\ell(\gamma) = \int_0^T \|\gamma'(t)\|\,dt$ in the continuous case or $\ell(\gamma) = \sum_{n=1}^N \|\Delta\gamma_n\|$ for sampled data. The level-$k$ component of the LNPS is

$$\mathrm{LNPS}^k_{i_1 \dots i_k}(\gamma) = S^k_{i_1 \dots i_k}(\gamma)\,/\,\ell(\gamma)^k,$$

leading to the overall length-normalized path signature up to level $m$:

$$S^{\mathrm{LN}}(\gamma)|_m = [\,1;\; S^1/\ell;\; S^2/\ell^2;\; \ldots;\; S^m/\ell^m\,]^T.$$

This construction yields a descriptor that is scale-invariant by design (Lai et al., 2017).
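Dividing each level-$k$ block by $\ell^k$ then gives the LNPS descriptor up to level 2; a minimal sketch (the function name and the flattened output layout are illustrative choices):

```python
import numpy as np

def lnps_level2(path):
    """Length-normalized path signature up to level 2: each level-k
    block of the truncated signature is divided by ell(path)**k."""
    incs = np.diff(path, axis=0)
    ell = np.linalg.norm(incs, axis=1).sum()  # total (polygonal) length
    d = path.shape[1]
    S1, S2 = np.zeros(d), np.zeros((d, d))
    for inc in incs:
        S2 += np.outer(S1, inc) + 0.5 * np.outer(inc, inc)
        S1 += inc
    # Flatten [1; S1/ell; S2/ell^2] into one descriptor vector.
    return np.concatenate([[1.0], S1 / ell, (S2 / ell**2).ravel()])
```

For $d = 2$ the descriptor has $1 + 2 + 4 = 7$ channels; a straight segment yields its unit direction at level 1 regardless of its length, as scale invariance requires.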

2. Theoretical Invariance Properties

Scale Invariance

Normalization by $\ell(\gamma)^k$ guarantees invariance to scaling of the path. Specifically, for any scalar $c > 0$, $S^k(c\gamma) = c^k S^k(\gamma)$ and $\ell(c\gamma) = c\,\ell(\gamma)$. Consequently,

$$\mathrm{LNPS}^k(c\gamma) = S^k(c\gamma)/[\ell(c\gamma)]^k = S^k(\gamma)/[\ell(\gamma)]^k = \mathrm{LNPS}^k(\gamma).$$
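These scaling identities can be verified numerically on a random piecewise-linear planar path (a sanity check under the discrete signature formulas, not a proof):

```python
import numpy as np

def sig12(p):
    """Levels 1 and 2 of the piecewise-linear path signature."""
    S1, S2 = np.zeros(2), np.zeros((2, 2))
    for inc in np.diff(p, axis=0):
        S2 += np.outer(S1, inc) + 0.5 * np.outer(inc, inc)
        S1 += inc
    return S1, S2

rng = np.random.default_rng(0)
path = rng.standard_normal((20, 2)).cumsum(axis=0)  # random planar path
c = 3.0
S1, S2 = sig12(path)
S1c, S2c = sig12(c * path)
assert np.allclose(S1c, c * S1)      # S^1 scales like c
assert np.allclose(S2c, c**2 * S2)   # S^2 scales like c^2
# Since ell(c * path) = c * ell(path), each ratio S^k / ell^k is unchanged.
```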

Rotation Invariance (Via Linear Combinations)

For $R \in SO(d)$, $S^k(R\gamma) = (R \otimes \cdots \otimes R)\,S^k(\gamma)$, ensuring that any contraction with an $SO(d)$-invariant multilinear form yields a rotation-invariant scalar. In $d = 2$, the antisymmetric second-order combination

$$A(\gamma) = \tfrac{1}{2}\left(S^2_{12} - S^2_{21}\right)$$

is rotation-invariant and corresponds to the signed area enclosed by $\gamma$ (Lai et al., 2017).
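A quick numeric check of this invariance, using the piecewise-linear level-2 signature (the unit-square test path and rotation angle are illustrative):

```python
import numpy as np

def signed_area(path):
    """Antisymmetric level-2 combination A = (S2_12 - S2_21) / 2,
    i.e. the Lévy (signed) area, computed from path increments."""
    S1, S2 = np.zeros(2), np.zeros((2, 2))
    for inc in np.diff(path, axis=0):
        S2 += np.outer(S1, inc) + 0.5 * np.outer(inc, inc)
        S1 += inc
    return 0.5 * (S2[0, 1] - S2[1, 0])

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
# Counterclockwise unit square: encloses signed area +1.
square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [0., 0.]])
assert np.isclose(signed_area(square), 1.0)
assert np.isclose(signed_area(square @ R.T), 1.0)  # unchanged by rotation
```

Reversing the traversal direction flips the sign of $A(\gamma)$, which is why the quantity is a *signed* area.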

3. Computational Methodology and Complexity

LNPS is computed for each window of length $W$ along the path. At each center index $n$, the local path $\gamma_n$ given by $\gamma_{n-w}, \ldots, \gamma_{n+w}$ is processed to compute all $S^k$ up to level $m$, each normalized by $\ell(\gamma_n)^k$, and each channel is z-normalized across the full sample. For each window, the naive complexity of the level-$k$ iterated sums is $O(W^k d^k)$, but application of Chen's identity reduces this to $O(m\,d^m)$ per window. The total cost for a signature of length $N$ is $O(N m\,d^m)$. For verification tasks sampled at 100 Hz, typical parameters are $d = 2$, $m = 2$ or $3$, and $W \approx 9$–$13$, permitting real-time computation (Lai et al., 2017).
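A sliding-window version of this pipeline can be sketched as follows (the window handling, the `1e-8` guard against zero-variance channels, and the function name are illustrative choices, not the published implementation):

```python
import numpy as np

def lnps_windows(path, w=4):
    """Per-point LNPS features over sliding windows of half-width w
    (window length W = 2w + 1) for a planar path of shape (N, 2).
    Returns an (N - 2w, 7) array of z-normalized channels."""
    feats = []
    for n in range(w, len(path) - w):
        win = path[n - w : n + w + 1]           # local path around index n
        incs = np.diff(win, axis=0)
        ell = np.linalg.norm(incs, axis=1).sum()  # local path length
        S1, S2 = np.zeros(2), np.zeros((2, 2))
        for inc in incs:
            S2 += np.outer(S1, inc) + 0.5 * np.outer(inc, inc)
            S1 += inc
        feats.append(np.concatenate([[1.0], S1 / ell, (S2 / ell**2).ravel()]))
    feats = np.array(feats)
    # z-normalize each channel across the full sample, as described above.
    return (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)
```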

4. LNPS Asymptotics and Path Length Recovery

Theoretical work on the normalized signature, particularly by Boedihardjo and Geng, establishes that, for planar paths of bounded variation, the path length $L$ can be recovered asymptotically from the normalized signature:

$$L = \lim_{n\to\infty} \big\| n!\, S_n(\gamma) \big\|_\pi^{1/n},$$

where $S_n(\gamma)$ is the $n$-th level iterated integral and $\|\cdot\|_\pi$ is the projective tensor norm (Boedihardjo et al., 2020). This length-recovery property holds whenever the path $\gamma$ is tree-reduced, i.e., possesses no nontrivial tree-like subarcs. The proof uses development into $\mathrm{SL}_2(\mathbb{R})$, with the key technical component being the decoupling of the associated ODEs into radial and angular dynamics and the demonstration that, under appropriate angle-bound conditions, the signature growth asymptotically matches the true path length.
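As a sanity check of this limit in the simplest tree-reduced case, a straight segment with displacement $v$ has $S_n = v^{\otimes n}/n!$, and the projective norm of the rank-one tensor $v^{\otimes n}$ is $\|v\|^n$, so $\|n!\,S_n\|_\pi^{1/n}$ recovers $L = \|v\|$ exactly at every level (the numbers below are an assumed toy example):

```python
import numpy as np
from math import factorial

# Straight segment with displacement v: length L = ||v|| = 5.
v = np.array([3.0, 4.0])
L = np.linalg.norm(v)
for n in (1, 5, 20):
    sig_norm = L**n / factorial(n)            # ||S_n||_pi for this path
    est = (factorial(n) * sig_norm) ** (1.0 / n)
    assert np.isclose(est, L)                 # limit is exact here
```

For general tree-reduced paths the projective norm is hard to evaluate directly and the identity holds only in the limit $n \to \infty$.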

5. Integration into RNN-based Signature Verification

LNPS descriptors are used as input sequences for recurrent neural networks in online signature verification. Specifically, a two-layer gated recurrent unit (GRU, 128 units each) is followed by a 64-dimensional fully connected layer to yield the embedding $G(D) \in \mathbb{R}^{64}$.

Training employs a combination of triplet loss (to push distances between forgeries and genuine signatures above a margin $C = 1$ while tightening intra-class distances) and a center loss (to cluster each client's signatures around a learned center $c_i$), with the final loss

$$L = L_t + \lambda_c L_c + \lambda_{\text{decay}} \|W\|^2,$$

where $\lambda_c = 0.5$, $\lambda_{\text{decay}} = 10^{-4}$, and $L_t$, $L_c$ are as specified in (Lai et al., 2017). Training thereby maintains both inter-class separation and intra-class compactness.
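A schematic of this objective in NumPy (the squared-distance conventions and the $1/2$ factor in the center term follow the standard triplet- and center-loss forms; they are assumptions, not the paper's exact code):

```python
import numpy as np

def combined_loss(anchor, positive, negative, center, W,
                  margin=1.0, lam_c=0.5, lam_decay=1e-4):
    """Sketch of the training objective: triplet loss with margin C,
    center loss toward the client's learned center c_i, plus weight
    decay on the network parameters W. All inputs are NumPy vectors."""
    d_ap = np.sum((anchor - positive) ** 2)     # genuine-genuine distance
    d_an = np.sum((anchor - negative) ** 2)     # genuine-forgery distance
    L_t = max(0.0, d_ap - d_an + margin)        # triplet term
    L_c = 0.5 * np.sum((anchor - center) ** 2)  # center term
    return L_t + lam_c * L_c + lam_decay * np.sum(W ** 2)
```

When the forgery embedding is already more than the margin farther away than the genuine one, the triplet term vanishes and only the clustering and decay terms contribute.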

6. Empirical Performance and Applications

In dynamic time warping (DTW) frameworks, LNPS achieves the following equal error rates (EER) on the SVC-2004 dataset:

LNPS level                             EER (%)
Level 1 (S¹/ℓ)                         8.1–8.7
Level 2 (S²/ℓ²)                        5.4–6.6
Level 4 (S⁴/ℓ⁴)                        4.98 (best)
Rotation-invariant combos up to 4      5.26

When combined with RNNs in a metric-learning setup and using $N = 10$ genuine templates, $W = 9$, $m = 2$, and joint training on SVC-2004 + MCYT-100, a state-of-the-art EER of 2.37% is obtained. Notably, using only $(\Delta x, \Delta y)$ as input without LNPS degrades performance to ${\sim}9.0\%$ EER. Joint training with additional clients further lowers the EER, evidencing the benefit of LNPS as a compact, (length-)stable, and discriminative path representation (Lai et al., 2017).

7. Structural and Geometric Significance

LNPS embeds rich geometric information by capturing all low-order polynomial features of the path, with length-normalization eliminating sensitivity to speed or global scale. The ability to extract invariant scalars (e.g., signed area) via linear contraction with invariant tensors is especially useful in applications where orientation variability is significant. The length-recovery asymptotic further connects LNPS to foundational questions in rough path theory and geometry, such as the classification and measurement of curves in terms of their signatures (Boedihardjo et al., 2020). This interplay establishes LNPS as a theoretically robust and practically effective tool in computational pattern analysis.
