
Length-Normalized Path Signature (LNPS)

Updated 23 February 2026
  • LNPS is a mathematical descriptor that normalizes iterated-integral path signatures by powers of the path length, ensuring compact and invariant representations.
  • It employs a truncated signature expansion with dynamic programming to efficiently compute features on sliding windows, making real-time processing feasible.
  • LNPS has been applied in online biometric signature verification using RNNs, achieving state-of-the-art error rates by capturing essential geometric and asymptotic properties.

The length-normalized path signature (LNPS) is a mathematical descriptor for continuous paths of bounded variation, designed to achieve compactness, scale-invariance, and (via suitable linear combinations) rotation-invariance. LNPS is constructed by normalizing the coefficients of the truncated iterated-integral (path signature) expansion by powers of the path length. Originally proposed and validated in the context of online dynamic pen-trajectories for biometric signature verification, LNPS builds on foundational results in the theory of path signatures and has well-understood geometric and asymptotic properties (Lai et al., 2017, Boedihardjo et al., 2020).

1. Truncated Path Signature and Length-Normalization

Let $\gamma:[0,T] \to \mathbb{R}^d$ be a continuous path of bounded variation, or its discrete (piecewise-linear) sampling. The level-$k$ iterated integral of $\gamma$ is

$$S^k_{i_1 \ldots i_k}(\gamma) = \int_{0 < t_1 < \cdots < t_k < T} d\gamma^{i_1}(t_1) \cdots d\gamma^{i_k}(t_k)$$

with $i_j \in \{1, \ldots, d\}$. The truncated path signature up to level $m$ is

$$S(\gamma)|_m = [S^0; S^1; \ldots; S^m]$$

where $S^k$ collects all multi-indices of order $k$. In practice, the $k$th level is interpreted as a tensor with $d^k$ entries. Discrete approximations sum over products of local path increments:

$$S^k(\gamma) \approx \sum_{0 < n_1 < \cdots < n_k \leq N} \Delta\gamma_{n_1} \otimes \cdots \otimes \Delta\gamma_{n_k}$$

with $\Delta\gamma_n = \gamma(t_n) - \gamma(t_{n-1})$.

Define the total path length

$$\ell(\gamma) = \int_0^T \|\gamma'(t)\|\,dt$$

or discretely as $\ell(\gamma) = \sum_{n=1}^N \|\Delta \gamma_n\|$.

The length-normalized path signature up to order $m$ is then

$$S^{LN}(\gamma)|_m = \left[1; \frac{S^1}{\ell}; \frac{S^2}{\ell^2}; \ldots; \frac{S^m}{\ell^m}\right]^T$$

with each level-$k$ entry divided by $\ell^k$ (Lai et al., 2017).
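As a concrete sketch, the discrete LNPS follows directly from path increments. The following numpy implementation is illustrative (the function name `lnps` and the strict-sum convention of the discrete approximation above are the only assumptions); it returns each level-$k$ block divided by $\ell^k$:

```python
import numpy as np

def lnps(path, m=2):
    """Length-normalized path signature of a discrete path, up to level m.

    Uses the strict-sum discrete approximation of the iterated integrals
    and divides the level-k block by ell**k. Returns a list of flattened
    arrays [S^1/ell, S^2/ell^2, ...].
    """
    inc = np.diff(path, axis=0)                # increments Delta gamma_n
    ell = np.linalg.norm(inc, axis=1).sum()    # discrete path length
    d = path.shape[1]
    sigs = [np.zeros(d ** k) for k in range(1, m + 1)]
    for delta in inc:
        # Update higher levels first so each uses the level below it
        # *before* the current increment is absorbed (strict inequalities).
        for k in range(m, 1, -1):
            sigs[k - 1] += np.kron(sigs[k - 2], delta)
        sigs[0] += delta
    return [s / ell ** (k + 1) for k, s in enumerate(sigs)]
```

Because each level-$k$ block scales as $c^k$ under $\gamma \mapsto c\,\gamma$ while $\ell^k$ scales identically, `lnps(c * path)` agrees with `lnps(path)` for any $c > 0$.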

2. Invariance Properties

Scale Invariance

If $\gamma \mapsto c\,\gamma$ for some $c>0$, then

$$\Delta(c\,\gamma) = c\,\Delta\gamma,$$

so $S^k(c\,\gamma) = c^k S^k(\gamma)$ and $\ell(c\,\gamma) = c\,\ell(\gamma)$. Thus,

$$\mathrm{LNPS}^k(c\,\gamma) = \frac{S^k(c\,\gamma)}{[\ell(c\,\gamma)]^k} = \frac{c^k S^k(\gamma)}{(c\,\ell(\gamma))^k} = \mathrm{LNPS}^k(\gamma),$$

demonstrating exact scale-invariance at every order (Lai et al., 2017).

Rotation Invariance

For $R \in SO(d)$, $S^k(R\,\gamma) = (R\otimes\cdots\otimes R)\,S^k(\gamma)$. Any contraction of $S^k$ with an $SO(d)$-invariant tensor yields a rotation-invariant scalar. In 2D, at $k=2$, the antisymmetric form

$$A(\gamma) = \frac{1}{2}(S^2_{12} - S^2_{21})$$

gives the signed area enclosed by the path (by Green's theorem), invariant under $SO(2)$. Higher-order invariants can be constructed via the representation theory of $SO(d)$ (Lai et al., 2017).
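A quick numerical illustration (not from the paper): for a unit square traversed counterclockwise, the strict-sum discrete level-2 signature recovers the enclosed area via the antisymmetric form above.

```python
import numpy as np

# Closed unit square, counterclockwise: enclosed (signed) area is 1.
square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [0., 0.]])
inc = np.diff(square, axis=0)

S1 = np.zeros(2)
S2 = np.zeros((2, 2))
for delta in inc:            # strict-sum discrete level-2 signature
    S2 += np.outer(S1, delta)
    S1 += delta

A = 0.5 * (S2[0, 1] - S2[1, 0])   # signed area via Green's theorem
print(A)                          # prints 1.0
```

Rotating the square by any $R \in SO(2)$ leaves $A$ unchanged, since the antisymmetric contraction is $SO(2)$-invariant.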

3. Asymptotic and Geometric Characterization

The normalized signature is central to the isometry conjecture: for a path $\gamma$ of finite length $L$, the $n$th-level signature satisfies

$$\|S_n(\gamma)\|_{\pi} \leq \frac{L^n}{n!},$$

where $\|\cdot\|_{\pi}$ is the projective tensor norm.

The conjecture (proved in (Boedihardjo et al., 2020) for planar paths that are tree-reduced or satisfy a local angle bound) is

$$L_1(\gamma) \equiv \lim_{n\to\infty} \left\| n!\, S_n(\gamma) \right\|_{\pi}^{1/n} = \mathrm{Length}(\gamma) = L$$

for all tree-reduced paths. This demonstrates that the normalized signature encodes global geometric information about the underlying curve, supplying a natural feature for identity and invariance.
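The limit can be checked numerically in the simplest case, a straight segment, where $S_n = v^{\otimes n}/n!$ and the Frobenius norm agrees with the projective norm on rank-one tensors. A sketch with illustrative parameters:

```python
import numpy as np
from math import factorial

# Straight segment of length L: (n! ||S_n||)^(1/n) should approach L.
L, N, n = 2.0, 400, 3
path = np.linspace([0.0, 0.0], [L, 0.0], N + 1)   # N equal increments
inc = np.diff(path, axis=0)

# Strict-sum discrete signature up to level n (d = 2).
sigs = [np.zeros(2 ** k) for k in range(1, n + 1)]
for delta in inc:
    for k in range(n, 1, -1):
        sigs[k - 1] += np.kron(sigs[k - 2], delta)
    sigs[0] += delta

est = (factorial(n) * np.linalg.norm(sigs[-1])) ** (1.0 / n)
print(round(est, 3))   # close to L = 2.0
```

The discretization error shrinks as $N$ grows, and the bound $n!\,\|S_n\| \le L^n$ holds at every $n$.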

4. Practical Computation and Complexity

LNPS is computed on sliding windows of size $W$ across sequences of length $N$. At each window centered at $n$, a local subpath is extracted, iterated sums up to level $m$ are computed, and each level $k$ is normalized by the local $\ell^k$. Each LNPS channel is then $z$-normalized over the sequence.

Naively, computing all entries of $S^k$ at level $k$ in one window would require $O(W^k d^k)$ operations, but dynamic programming (Chen's identity) reduces the cost of all levels up to $m$ to $O(m d^m)$ per window, for a total cost of $O(N m d^m)$. For standard settings ($d=2$, $m=2$ or $3$, $W \approx 9$–$13$), real-time computation for data sampled at 100 Hz is achieved (Lai et al., 2017).
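The dynamic-programming update behind this bound can be sketched for a single appended sample: by Chen's identity, each level is extended using the level below it, so absorbing one increment costs $O(d^m)$ rather than recomputing the whole window. A minimal sketch at $m=2$ (strict-sum convention as above; names are illustrative):

```python
import numpy as np

def append_increment(S1, S2, delta):
    """Extend a level-2 (strict-sum) signature by one increment.

    By Chen's identity, the new level-2 terms pair the old level-1 sum
    with the fresh increment: cost O(d^2) per appended sample.
    """
    S2 = S2 + np.outer(S1, delta)
    S1 = S1 + delta
    return S1, S2

# Streaming over a path reproduces the batch double sum.
path = np.random.default_rng(0).normal(size=(50, 2))
inc = np.diff(path, axis=0)

S1, S2 = np.zeros(2), np.zeros((2, 2))
for delta in inc:
    S1, S2 = append_increment(S1, S2, delta)

batch = sum(np.outer(inc[i], inc[j])
            for i in range(len(inc)) for j in range(i + 1, len(inc)))
```

The streaming `S2` matches the $O(W^2)$ batch double sum exactly, which is what makes per-window recomputation unnecessary.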

5. Application to Online Signature Verification with Neural Sequence Models

LNPS vectors are used as input features to an RNN for biometric verification. The data pipeline is as follows:

  • Input: $D = [S^{LN}(d(1))|_m, S^{LN}(d(2))|_m, \ldots, S^{LN}(d(N))|_m]$, with $d(n)$ being the $W$-window centered at $n$.
  • Architecture: a two-layer Gated Recurrent Unit (GRU) with 128 units per layer, followed by a 64-dimensional fully connected output.
  • Training: employs a triplet loss (pushing genuine-positive pairs closer than forgeries by a margin $C=1$) and a center loss (clustering per-client signatures) with regularization. The final loss:

$$L = L_t + \lambda_c L_c + \lambda_{\mathrm{decay}}\|W\|^2$$

with $\lambda_c = 0.5$, $\lambda_{\mathrm{decay}} = 10^{-4}$.

Triplet loss:

$$L_t = \sum_{\mathrm{triplets}} \max\{ \|G(g) - G(p)\| - \|G(g) - G(f)\| + C,\; 0\}$$

Center loss:

$$L_c = \sum_{i} \left( \|G(g_i) - c_i\| + \|G(p_i) - c_i\| \right)$$

where $G(\cdot)$ is the RNN embedding, $g, p, f$ denote anchor, positive, and forgery, and $c_i$ is the center for client $i$ (Lai et al., 2017).
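A minimal numpy sketch of the two loss terms (toy embeddings stand in for the GRU output $G(\cdot)$; the weight-decay term and batching over triplets are omitted):

```python
import numpy as np

def triplet_loss(Gg, Gp, Gf, C=1.0):
    """Hinge triplet term: the genuine pair (g, p) must be closer than
    the genuine-forgery pair (g, f) by at least the margin C."""
    return max(np.linalg.norm(Gg - Gp) - np.linalg.norm(Gg - Gf) + C, 0.0)

def center_loss(Gg, Gp, c):
    """Pull both genuine embeddings toward the per-client center c."""
    return np.linalg.norm(Gg - c) + np.linalg.norm(Gp - c)

# Toy 2-D embeddings: anchor g, genuine positive p, forgery f.
g = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
f = np.array([3.0, 0.0])
c = np.array([0.05, 0.0])

total = triplet_loss(g, p, f) + 0.5 * center_loss(g, p, c)
```

With the forgery far outside the margin, the hinge term vanishes and only the center term contributes, which is exactly the intended behavior of the combined objective.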

6. Empirical Results and Comparative Performance

LNPS, when used alone in a DTW-based verification framework, demonstrated progressive gains at higher signature levels:

  • Level 1 (first order): ≈8.1–8.7% EER
  • Level 2: ≈5.4–6.6% EER
  • Level 4: 4.98% EER (best)
  • Using rotation-invariant features up to level 4: 5.26% EER

Integration of LNPS with the RNN system (trained jointly on SVC-2004 and MCYT-100) using $N=10$ templates, window $W=9$, and $m=2$ achieved a 2.37% EER (state of the art at publication). An ablation replacing LNPS with raw $(\Delta x, \Delta y)$ features substantially degraded the EER (≈9.0%). Excluding MCYT-100 from training increased the EER to 3.58%; adding more clients improved performance, converging to 2.37% (Lai et al., 2017).

7. Role in Path Geometry and Fundamental Results

LNPS captures essential geometric aspects of trajectories beyond the biometric context. As established in the theoretical analysis of (Boedihardjo et al., 2020), the correct asymptotic normalization of the path signature retrieves the path length under mild geometric (tree-reduced or local angle-constrained) conditions. The framework utilizes a Cartan development into $\mathrm{SL}_2(\mathbb{R})$, decoupling the signature asymptotics into radial and angular ODEs, with length recovery intimately linked to the angular evolution of the path. These properties affirm the foundational role of LNPS in geometric data analysis, stochastic processes, and sequential learning.
