
State Transition Event Timeseries (STE-ts)

Updated 8 December 2025
  • STE-ts is a formalism that captures explicit, timestamped state transitions and durations in discrete event systems for clear temporal analysis.
  • The STH metric leverages both event times and durations, significantly reducing computational costs compared to traditional resampling techniques.
  • STE-ts underpins machine learning models like EvoNet and eFHMM, facilitating scalable event prediction in applications from NILM to industrial monitoring.

A State Transition Event Timeseries (STE-ts) is a formalism for representing and analyzing the sequence of explicit, timestamped state changes in discrete event systems. STE-ts captures the dual nature of stateful and event-driven dynamics in time-varying systems by encoding both the timing and nature of transitions, as well as durations-in-state. This enables highly efficient and interpretable modeling, pattern search, and downstream inference in domains ranging from industrial monitoring to event prediction on sequential data (Marié et al., 1 Dec 2025, Hu et al., 2019, Yan et al., 2021).

1. Formal Definition and Representation

Let $S = \{ o_0, o_1, \ldots \}$ denote the finite set of possible states of a discrete event system (DES). An STE-ts is characterized by an ordered sequence of state-changing transitions, each defined as $t_r = (b, a) \in S \times S$ with $b \neq a$. These transitions are augmented with precise timestamps $t_k \in \mathbb{R}$, alongside an explicit start time $t_0$ and end time $t_{n+1}$:

$$s = \{ (t_k, O_k) \}_{k=0}^{n}$$

where $O_k \in S$ is the state valid on the interval $[t_k, t_{k+1})$ and $\Delta_k = t_{k+1} - t_k$ gives the duration of each state. This representation ensures that the system's temporal evolution is encoded explicitly as a series of intervals, each associated with a unique state and corresponding duration (Marié et al., 1 Dec 2025).

A related but less granular structure is the STE-seq, an ordered list of transitions $\{ t_{r_k} = (b_k, a_k) \}_{k=1}^{n}$, which induces a categorical sequence of $n+1$ states, without explicit timestamps or durations.
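For concreteness, an STE-ts can be stored as little more than a sorted list of (timestamp, state) pairs together with an explicit end time. A minimal Python sketch, with names (`STEts`, `durations`) that are illustrative rather than from the paper:

```python
from typing import List, Tuple

# Minimal encoding of an STE-ts as defined above: (t_k, O_k) pairs, where
# state O_k holds on [t_k, t_{k+1}), plus an explicit end time t_{n+1}.
STEts = List[Tuple[float, str]]

def durations(s: STEts, t_end: float) -> List[Tuple[str, float]]:
    """Return (O_k, Delta_k) pairs, i.e. each state with its duration-in-state."""
    out = []
    for k, (t_k, state) in enumerate(s):
        t_next = s[k + 1][0] if k + 1 < len(s) else t_end
        out.append((state, t_next - t_k))
    return out

# Example: a system that is OFF on [0, 2), ON on [2, 5), OFF on [5, 6)
s = [(0.0, "OFF"), (2.0, "ON"), (5.0, "OFF")]
print(durations(s, t_end=6.0))  # [('OFF', 2.0), ('ON', 3.0), ('OFF', 1.0)]
```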

2. Algorithms and Metrics for STE-ts Comparison

Traditional similarity and distance measures for time series—based on resampling or categorical alignment—fail to exploit the explicit event/duration structure of STE-ts and commonly incur distortion or inefficiency for large event counts. The Selective Temporal Hamming (STH) framework generalizes prior temporal Hamming and Jaccard approaches by leveraging both transition times and durations-in-state, without incurring the resampling bottleneck.

The computation proceeds as follows:

  1. Form the union of all event times from two STE-ts $s_i$ and $s_j$, partitioning the intersection interval $[\max(t_{i0}, t_{j0}), \min(t_{i,n_i+1}, t_{j,n_j+1})]$ into subintervals $\ell_k = [\tau_k, \tau_{k+1})$.
  2. For each $\ell_k$, assign states $(o_{ik}, o_{jk})$ and duration $\Delta_k$.
  3. Partition $S$ into three sets: $S_I$ (states of interest), $S_o$ (other/neutral states), and $S_E$ (excluded/“don't care” states).
  4. Define a state similarity $\mathrm{sim}: S \times S \to [0, 1]$.
  5. The STH similarity and distance are

$$\mathrm{STH}_{(S_I, S_o)}(s_i, s_j) = \begin{cases} \mathrm{undefined} & \text{if } Y = 0 \\ X / Y & \text{otherwise} \end{cases}$$

with

$$X = \sum_{k:\, o_{ik},\, o_{jk} \notin S_E} \mathrm{sim}(o_{ik}, o_{jk}) \cdot \Delta_k, \qquad Y = \sum_{k:\, o_{ik} \in S_I \text{ or } o_{jk} \in S_I} \Delta_k$$

and the associated distance $\mathrm{STHD} = 1 - \mathrm{STH}$ (Marié et al., 1 Dec 2025).

This matching runs in linear time $O(n_i + n_j)$ and, for suitable choices of $S_I$, $S_o$, and $S_E$, recovers resampled Hamming or Jaccard distances as limiting cases.
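The sketch below is one way to realize this sweep in Python under the definitions above; it is an illustrative reading of the published formulas (the state sets and the `sim` function are supplied by the caller), not the authors' reference implementation:

```python
from typing import Callable, List, Set, Tuple

Event = Tuple[float, str]  # (t_k, O_k): state O_k holds from time t_k

def sth(si: List[Event], end_i: float,
        sj: List[Event], end_j: float,
        s_interest: Set[str], s_exclude: Set[str],
        sim: Callable[[str, str], float]) -> float:
    """Selective Temporal Hamming similarity of two STE-ts (sketch)."""
    lo, hi = max(si[0][0], sj[0][0]), min(end_i, end_j)

    def intervals(s: List[Event], end: float):
        # Yield (stop, state) for each interval of s, clipped to [lo, hi).
        for k, (t, o) in enumerate(s):
            stop = s[k + 1][0] if k + 1 < len(s) else end
            if stop > lo and t < hi:
                yield min(stop, hi), o

    X = Y = 0.0
    it_i, it_j = intervals(si, end_i), intervals(sj, end_j)
    bi, oi = next(it_i)
    bj, oj = next(it_j)
    tau = lo
    while tau < hi:
        stop = min(bi, bj)                    # subinterval boundary tau_{k+1}
        delta = stop - tau                    # duration Delta_k
        if oi not in s_exclude and oj not in s_exclude:
            X += sim(oi, oj) * delta          # numerator term
        if oi in s_interest or oj in s_interest:
            Y += delta                        # denominator term
        tau = stop
        if tau >= hi:
            break
        if stop == bi:
            bi, oi = next(it_i)
        if stop == bj:
            bj, oj = next(it_j)
    return X / Y if Y > 0 else float("nan")   # undefined when Y = 0
```

Choosing `sim(a, b) = 1.0 if a == b else 0.0` with `s_interest` equal to the full state set and `s_exclude` empty reduces this to a normalized temporal Hamming similarity, consistent with the limiting cases noted above.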

3. Machine Learning on STE-ts: Dynamic Graph and GNN Models

STE-ts are ideally suited for structured, interpretable learning approaches that capture both local transition dynamics and global sequence evolution. The Evolutionary State Graph (ESG) framework models a multivariate series $\{ \mathbf{x}_1, \dots, \mathbf{x}_{\tau T} \}$ segmented into $T$ intervals of length $\tau$, each assigned soft membership weights over a set of learned prototypical states $\{ \bm{\Theta}_v \}$, inferred via clustering, shapelets, SAX, or Gaussian Mixture Models (Hu et al., 2019).

Dynamic, directed, weighted graphs $G^{(t)} = (\mathcal{V}, \mathcal{E}^{(t)}, \{ m_{(v,v')}^{(t)} \})$ are constructed for $t = 2, \dots, T$, where each edge $(v, v')$ encodes the joint transition weight between states in consecutive segments:

$$m_{(v,v')}^{(t)} = P(\bm{\Theta}_v \mid \mathbf{X}_{t-1}) \cdot P(\bm{\Theta}_{v'} \mid \mathbf{X}_t)$$

Pruning edges with small weights ($m_{(v,v')}^{(t)} < \epsilon$) yields succinct, interpretable representations.
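In code, constructing $G^{(t)}$ reduces to an outer product of consecutive membership vectors followed by thresholding. A minimal numpy sketch (the function name and array shapes are illustrative):

```python
import numpy as np

def transition_graphs(P: np.ndarray, eps: float = 1e-3):
    """Given soft memberships P of shape (T, V), where P[t, v] stands for
    P(Theta_v | X_t), return pruned edge-weight matrices M_t for t = 2..T
    with M_t[v, v'] = P(Theta_v | X_{t-1}) * P(Theta_v' | X_t)."""
    graphs = []
    for t in range(1, P.shape[0]):
        M = np.outer(P[t - 1], P[t])   # joint transition weights
        M[M < eps] = 0.0               # prune negligible edges
        graphs.append(M)
    return graphs

# Example: T = 3 segments, V = 2 prototype states
P = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]])
print(transition_graphs(P)[0])
```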

The Evolutionary State Graph Network (EvoNet) extends standard GNNs to process these dynamic graphs with the following components (a toy sketch follows the list):

  • Local node-to-node message passing ($\mathcal{F}_{\mathrm{MP}}$), parameterized as in GGNN or variants.
  • Temporal graph-level propagation and recurrent node–graph interaction (EvoBlock), including attention over critical time windows ($\alpha_t$).
  • End-to-end training for event prediction, with the model explicitly producing interpretable weights identifying state-transitions and moments most contributing to predicted events (Hu et al., 2019).
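The toy numpy sketch below illustrates these ingredients at a schematic level, with plain matrix products standing in for the GGNN-style message passing and the EvoBlock recurrence; every weight is a random placeholder, not part of the published model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

V, d, T = 4, 8, 5                       # states, feature dim, time windows
W_msg = rng.normal(size=(d, d))         # message-passing weights (placeholder)
w_att = rng.normal(size=d)              # attention scoring vector (placeholder)

H = rng.normal(size=(V, d))             # node (state) representations
graphs = [rng.random((V, V)) for _ in range(T)]  # edge weights m^(t)

readouts = []
for M in graphs:
    H = relu(M @ H @ W_msg)             # local node-to-node message passing
    readouts.append(H.mean(axis=0))     # graph-level summary for this window

R = np.stack(readouts)                  # (T, d) sequence of graph summaries
alpha = softmax(R @ w_att)              # attention over time windows (alpha_t)
g = alpha @ R                           # attended representation for prediction
```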

4. Event-Driven Modeling and Inference

In event-driven approaches, STE-ts enables efficient inference and feature extraction by restricting processing entirely to transitions, bypassing the need to process every raw sample. In non-intrusive load monitoring (NILM), the event-driven Factorial Hidden Markov Model (eFHMM) leverages detected transition events $\{ e_i \}$ (e.g., extracted via change-point detection and feature segmentation) (Yan et al., 2021). Each event is annotated as

$$e_i = \big( t_s^{(i)}, t_{\mathrm{spike}}^{(i)}, t_e^{(i)}, \Delta P^{(i)}, \Delta Q^{(i)}, S^{(i)} \big)$$

An eFHMM models appliances $m = 1, \dots, M$, each with $S_m$ possible states. The joint hidden states $(z_i^{(1)}, \ldots, z_i^{(M)})$ evolve with only one appliance changing per event, and emission distributions for transient and steady-state intervals are learned per transition type.

Inference proceeds in two stages:

  • Transient signature maximization: Identify candidate state transitions maximizing likelihood under event-specific emission and transition probabilities.
  • Steady signature verification: Accept candidate if post-event steady-state features match the learned mean within a confidence threshold; otherwise, backtrack to the next-best candidate.

The computation is linear in the number of events and states—contrasting sharply with time-driven FHMMs whose cost grows with the sample count—enabling real-time, scalable edge-cloud NILM solutions with significant data-rate reduction (Yan et al., 2021).
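A schematic sketch of this two-stage decode for a single event might look as follows; the candidate set, likelihood table, and tolerance are placeholders for quantities the real eFHMM learns per appliance and per transition type:

```python
def decode_event(candidates, transient_loglik, steady_feature,
                 steady_mean, steady_tol):
    """candidates: (appliance, from_state, to_state) tuples, one appliance
    changing per event. Returns the highest-likelihood candidate whose
    predicted steady-state feature matches the observation, else None."""
    # Stage 1: rank candidates by transient-signature log-likelihood.
    ranked = sorted(candidates, key=lambda c: transient_loglik[c], reverse=True)
    # Stage 2: accept the first candidate whose steady signature verifies;
    # otherwise backtrack to the next-best candidate.
    for c in ranked:
        if abs(steady_feature - steady_mean[c]) <= steady_tol:
            return c
    return None
```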

5. Experimental Evaluations and Interpretability

Experiments on simulated and real-world datasets demonstrate that STE-ts-based methods outperform or subsume standard metrics and sequencing tools:

  • The STH metric runs 3.5–14× faster than finely resampled Hamming distances on large-scale or high-frequency datasets, achieving up to 5,000× speedup as the sampling period approaches zero (Marié et al., 1 Dec 2025).
  • STH more accurately captures the temporal overlap and durations of matched states, with special cases equivalent to normalized Temporal Hamming (nTH) and Temporal Jaccard (TJ). Clustering performance on weather and sleep-stage datasets illustrates improved cluster purity and interpretive granularity through exclusion of ambiguous intervals and focus on selected states.
  • In dynamic graph modeling with EvoNet, F1/AUC results surpass 11 competitive baselines (e.g., BoP, SAX-VSM, HMM, MRNN, GCN-LSTM, EvolveGCN, Time2Graph) across domains including stock, network flow, and server-log series. Full dynamic graph modeling and node–graph attention mechanisms confer clear performance and interpretability gains, with interpretable edge weights highlighting transitions most correlated with anomalous events (Hu et al., 2019).

6. Applications and Outlook

STE-ts has broad applicability in domains where systems undergo explicit, timestamped transitions between well-defined states:

  • Monitoring and prediction in industrial, socio-economic, and natural systems
  • Pattern matching and clustering in genomics, neurophysiology, and environmental event series
  • Large-scale NILM by event-driven load disaggregation (Yan et al., 2021)
  • Explainable event-prediction from noisy multivariate signals using dynamic state graphs (Hu et al., 2019)

A key advantage is scalability: algorithms operate linearly in the number of events, unaffected by high sample rates or lengthy observation periods. Additionally, the ability to flexibly focus on or exclude specific states (including ambiguous or missing annotations) enables practical handling of real-world, partially annotated data—while preserving metric consistency when certain set constraints are respected (Marié et al., 1 Dec 2025).

This suggests that future research may further unify event-centric and state-centric time series analysis, with STE-ts frameworks seeing growing adoption as data volumes and heterogeneity increase and as demand intensifies for transparent, scalable, and actionable sequential pattern recognition.
