
Dual-Regime Absorbing Markov Chain

Updated 11 December 2025
  • Dual-Regime AMC is a finite discrete-time Markov chain with two transient regimes, each defined by unique transition and absorption dynamics.
  • It employs sub-stochastic transition matrices and a boundary kernel for regime switching, enabling precise computation of absorption probabilities and time distributions.
  • The framework supports applications in semantic information freshness, remote estimation, and computer vision by offering a tractable model for threshold-driven phase changes.

A dual-regime absorbing Markov chain (DR-AMC) is a finite discrete-time Markov chain architecture characterized by a piecewise regime structure: its transient state space is partitioned into two distinct operational regimes, each with its own transition and absorption dynamics. The process evolves according to one set of sub-stochastic transition and absorption matrices in Regime 1 for a fixed, possibly policy-determined, number of timesteps or until an absorption event; should it survive this phase, it undergoes a regime switch, via a boundary transition kernel, to Regime 2, where it evolves until eventual absorption. This structure supports exact stochastic modeling of systems with threshold-driven or policy-induced phase changes and yields closed-form expressions for key performance metrics through a regime-aware extension of classical phase-type (PH) distributions. The DR-AMC framework provides analytical tractability and efficient parameterization for a broad class of applications in stochastic control, information freshness, remote estimation, and computer vision (Cosandal et al., 3 Dec 2025, Jiang et al., 2018, Cosandal et al., 14 Apr 2025, Yazicioglu, 2020).

1. Formal Definition and State Space Architecture

A DR-AMC is defined as a discrete-time Markov chain $\{Y_t\}_{t \geq 0}$ on a finite state space partitioned into three disjoint subsets:

  • Regime 1 transient states: $\{1, 2, \dots, K_1\}$
  • Regime 2 transient states: $\{1', 2', \dots, K_2'\}$
  • Absorbing states: $\{a_1, a_2, \dots, a_L\}$

The process initiates in a Regime 1 transient state according to an initial distribution $\bm{\beta}_1$. Transitions within Regime 1 are governed by the sub-stochastic matrix $\bm{A}_1$ ($K_1 \times K_1$), with absorption governed by $\bm{B}_1$ ($K_1 \times L$). If absorption does not occur prior to the deterministic switching time $\tau$, the chain transitions into Regime 2 via a boundary transition matrix $\bm{\Theta}$ ($K_1 \times K_2$). Subsequently, Regime 2 dynamics are determined by $\bm{A}_2$ and $\bm{B}_2$ of analogous dimensions. A DR-AMC is thus fully specified by the tuple $(\bm{\beta}_1, \tau, \bm{\Theta}, \bm{A}_1, \bm{A}_2, \bm{B}_1, \bm{B}_2)$ (Cosandal et al., 3 Dec 2025).
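As an illustration, the defining tuple can be instantiated and validated numerically. The following Python sketch uses small hypothetical parameters ($K_1 = K_2 = 2$, $L = 1$); every matrix and the switching time are invented for illustration, and the checks simply confirm the structural requirements: each block matrix $[A_i \; B_i]$ is row-stochastic, and $\bm{\Theta}$ and $\bm{\beta}_1$ are proper distributions.

```python
import numpy as np

# Hypothetical DR-AMC instance: K1 = K2 = 2 transient states per regime, L = 1.
beta1 = np.array([0.7, 0.3])            # initial distribution over Regime 1 states
tau = 3                                 # deterministic switching time
A1 = np.array([[0.6, 0.2],
               [0.1, 0.7]])             # Regime 1 transient transitions (sub-stochastic)
B1 = np.array([[0.2],
               [0.2]])                  # Regime 1 absorption probabilities
A2 = np.array([[0.3, 0.2],
               [0.1, 0.3]])             # Regime 2 transient transitions
B2 = np.array([[0.5],
               [0.6]])                  # Regime 2 absorption probabilities
Theta = np.array([[0.5, 0.5],
                  [0.3, 0.7]])          # boundary kernel: Regime 1 -> Regime 2

# Structural checks: [A_i B_i] row-stochastic, Theta row-stochastic, beta1 a distribution.
assert np.allclose(np.hstack([A1, B1]).sum(axis=1), 1.0)
assert np.allclose(np.hstack([A2, B2]).sum(axis=1), 1.0)
assert np.allclose(Theta.sum(axis=1), 1.0)
assert np.isclose(beta1.sum(), 1.0)
```

Any parameters passing these checks define a valid DR-AMC; the specific numbers here carry no special meaning.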

2. Transition Matrices and Regime Switching

The regime-specific transition matrices are defined in block-canonical form:

$$P_1 = \begin{pmatrix} A_1 & B_1 \\ \bm{0} & I_L \end{pmatrix}, \qquad P_2 = \begin{pmatrix} A_2 & B_2 \\ \bm{0} & I_L \end{pmatrix}$$

  • $A_1 \in \mathbb{R}^{K_1 \times K_1}$ and $B_1 \in \mathbb{R}^{K_1 \times L}$ dictate Regime 1 evolution and absorption to $\{a_1, \dots, a_L\}$.
  • $A_2 \in \mathbb{R}^{K_2 \times K_2}$ and $B_2 \in \mathbb{R}^{K_2 \times L}$ are defined analogously for Regime 2.
  • Regime switching at time $t = \tau - 1$ is accomplished by $\bm{\Theta}$, which satisfies $\sum_j \theta_{ij} = 1$ for each $i$.
  • The initial distribution entering Regime 2 is $\bm{\beta}_2 = \bm{\beta}_1 A_1^{\tau-1} \bm{\Theta}$.

The switching paradigm supports both time-based (deterministic $\tau$) and policy-based (stopping time or control) transitions, with typical analysis assuming deterministic thresholds (Cosandal et al., 3 Dec 2025, Cosandal et al., 14 Apr 2025).
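The entry distribution $\bm{\beta}_2 = \bm{\beta}_1 A_1^{\tau-1} \bm{\Theta}$ is a one-line computation; a minimal sketch with hypothetical parameters follows. Note that $\bm{\beta}_2$ is a sub-distribution: its total mass equals the probability of surviving Regime 1, which is strictly less than 1 whenever Regime 1 absorption is possible.

```python
import numpy as np
from numpy.linalg import matrix_power

# Hypothetical parameters (K1 = K2 = 2)
beta1 = np.array([0.7, 0.3])
tau = 3
A1 = np.array([[0.6, 0.2],
               [0.1, 0.7]])
Theta = np.array([[0.5, 0.5],
                  [0.3, 0.7]])

# Sub-distribution over Regime 2 states for chains that survive Regime 1:
beta2 = beta1 @ matrix_power(A1, tau - 1) @ Theta

# Its mass equals the survival probability beta1 A1^{tau-1} 1, not 1.
survival = beta1 @ matrix_power(A1, tau - 1) @ np.ones(2)
assert np.isclose(beta2.sum(), survival)
assert 0 < beta2.sum() < 1
```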

3. Absorption Probabilities and Fundamental Matrices

Absorption in the DR-AMC is characterized piecewise. In Regime 1, the probability of absorption into each absorbing state prior to regime switch is:

$$\bm{\sigma}_1 = \bm{\beta}_1 \left(I - A_1^{\tau-1}\right) (I - A_1)^{-1} B_1$$

If the chain transitions to Regime 2, the absorption probability vector is:

$$\bm{\sigma}_2 = \bm{\beta}_2 (I - A_2)^{-1} B_2$$

The above decomposes total absorption probability based on whether absorption occurs before or after the regime change. The construction leverages truncated sums for regime-1 dwell times and classic PH-fundamental matrix formulas in Regime 2. All moments and distributions of time to absorption (absorption time TT) follow accordingly (Cosandal et al., 3 Dec 2025).
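Both vectors reduce to linear solves, so the decomposition can be checked numerically. The sketch below uses hypothetical parameters; the final assertion verifies that $\bm{\sigma}_1$ and $\bm{\sigma}_2$ together account for the total absorption probability of 1.

```python
import numpy as np
from numpy.linalg import matrix_power

# Hypothetical DR-AMC instance (K1 = K2 = 2, L = 1)
beta1 = np.array([0.7, 0.3]); tau = 3
A1 = np.array([[0.6, 0.2], [0.1, 0.7]]); B1 = np.array([[0.2], [0.2]])
A2 = np.array([[0.3, 0.2], [0.1, 0.3]]); B2 = np.array([[0.5], [0.6]])
Theta = np.array([[0.5, 0.5], [0.3, 0.7]])

I = np.eye(2)
beta2 = beta1 @ matrix_power(A1, tau - 1) @ Theta

# sigma_1: absorption before the switch, via the truncated-sum identity
# (I - A1^{tau-1})(I - A1)^{-1} = sum_{t=0}^{tau-2} A1^t
sigma1 = beta1 @ (I - matrix_power(A1, tau - 1)) @ np.linalg.solve(I - A1, B1)

# sigma_2: absorption after the switch, via the classic fundamental matrix
sigma2 = beta2 @ np.linalg.solve(I - A2, B2)

# The two pieces partition the total absorption probability.
assert np.isclose(sigma1.sum() + sigma2.sum(), 1.0)
```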

4. Dual-Regime Phase-Type Distribution of Absorption Time

Let TT denote the random absorption time. The dual-regime discrete phase-type (DR-DPH) distribution describes the law of TT:

$$p_T(t) = \begin{cases} \bm{\beta}_1 A_1^{t-1} (\bm{1} - A_1 \bm{1}), & 1 \leq t < \tau \\ \bm{\beta}_2 A_2^{t-\tau} (\bm{1} - A_2 \bm{1}), & t \geq \tau \end{cases}$$

This piecewise construction generalizes the single-regime PH distribution, capturing the two-phase absorption process. Higher moments—including factorial moments—can be derived in closed form, with ordinary moments assembled via Stirling number combinatorics. These explicit statistics enable renewal-reward evaluations of performance in threshold-driven control systems (Cosandal et al., 3 Dec 2025, Cosandal et al., 14 Apr 2025).
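The piecewise pmf above is directly computable. The following sketch, with hypothetical parameters, evaluates $p_T(t)$ over a truncated horizon, confirms that the law is proper (mass 1), and computes the mean absorption time numerically rather than via the closed-form factorial moments.

```python
import numpy as np
from numpy.linalg import matrix_power

# Hypothetical DR-AMC instance (K1 = K2 = 2, L = 1)
beta1 = np.array([0.7, 0.3]); tau = 3
A1 = np.array([[0.6, 0.2], [0.1, 0.7]])
A2 = np.array([[0.3, 0.2], [0.1, 0.3]])
Theta = np.array([[0.5, 0.5], [0.3, 0.7]])
one = np.ones(2)
beta2 = beta1 @ matrix_power(A1, tau - 1) @ Theta

def p_T(t):
    """DR-DPH pmf of the absorption time T."""
    if t < tau:
        return beta1 @ matrix_power(A1, t - 1) @ (one - A1 @ one)
    return beta2 @ matrix_power(A2, t - tau) @ (one - A2 @ one)

ts = range(1, 300)                       # truncation horizon; the tail is geometric
probs = np.array([p_T(t) for t in ts])
assert np.isclose(probs.sum(), 1.0)      # the pmf is proper
mean_T = float(sum(t * p for t, p in zip(ts, probs)))
```

Because the tail beyond the horizon decays geometrically at the spectral radius of $A_2$, the truncated sums converge quickly.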

5. Applications in Information Freshness and Control

A principal application of DR-AMCs is in modeling threshold policies for semantic-aware freshness metrics such as Age of Incorrect Information (AoII). Here, Regime 1 models periods of no transmission (passive observation), and Regime 2 models aggressive update transmission upon exceeding an AoII threshold. The DR-AMC framework precisely quantifies:

  • Distribution and moments of out-of-sync durations
  • Expected cost for arbitrary AoII penalties $g(n)$
  • Transmission costs and overall system performance

This analytic tractability supports semi-Markov decision process (SMDP) formulations for optimal remote estimation policies under transmission costs, outperforming single-threshold or randomized policies in empirical evaluation (Cosandal et al., 3 Dec 2025, Cosandal et al., 14 Apr 2025).
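As a hedged sketch of the renewal-reward evaluation, suppose the out-of-sync duration $T$ follows a DR-DPH law (hypothetical parameters below) and take a hypothetical linear penalty $g(n) = n$. The long-run average AoII penalty per slot is then the expected cumulative penalty over a cycle divided by the expected cycle length.

```python
import numpy as np
from numpy.linalg import matrix_power

# Hypothetical DR-DPH out-of-sync duration T (construction from Section 4)
beta1 = np.array([0.7, 0.3]); tau = 3
A1 = np.array([[0.6, 0.2], [0.1, 0.7]])
A2 = np.array([[0.3, 0.2], [0.1, 0.3]])
Theta = np.array([[0.5, 0.5], [0.3, 0.7]])
one = np.ones(2)
beta2 = beta1 @ matrix_power(A1, tau - 1) @ Theta

def p_T(t):
    if t < tau:
        return beta1 @ matrix_power(A1, t - 1) @ (one - A1 @ one)
    return beta2 @ matrix_power(A2, t - tau) @ (one - A2 @ one)

g = lambda n: n                          # hypothetical linear AoII penalty
H = 300                                  # truncation horizon for the geometric tail

# Renewal-reward pieces: expected cumulative penalty and expected cycle length
exp_penalty = sum(p_T(t) * sum(g(n) for n in range(1, t + 1)) for t in range(1, H))
exp_length = sum(t * p_T(t) for t in range(1, H))
avg_cost = exp_penalty / exp_length      # long-run average penalty per slot
```

In the cited works these quantities are obtained in closed form from the DR-DPH moments; the truncated sums here are only a numerical stand-in.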

6. Comparative Regime Switching and Graph-Based Perspectives

DR-AMCs are naturally related to time-varying or switching Markov chains, as explored in graph-theoretic absorption analyses. In the context of multiple operating modes (e.g., two regimes), reachability and absorption can be characterized using union and intersection graphs constructed from each mode's non-absorbing transition structure. Key absorption conditions include:

  • Stabilizability: existence of a state-feedback switching policy ensuring absorption
  • Sufficient conditions under arbitrary regime switching: acyclicity, weak acyclicity, or distance contraction in the union/intersection graphs

These results provide complementary perspectives on absorption in DR-AMC-like systems where the regime is determined by policy or environmental events, further broadening the modeling power (Yazicioglu, 2020).
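As a hedged illustration of the acyclicity condition, the pure-Python sketch below builds the union of two modes' transient edge sets (both hypothetical) and tests it for cycles with a depth-first search; a cycle among non-absorbing states in the union graph means this sufficient condition fails.

```python
def has_cycle(nodes, edges):
    """DFS cycle detection on a directed graph given as a set of (u, v) edges."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in nodes}

    def dfs(u):
        color[u] = GRAY                  # GRAY = on the current DFS stack
        for v in adj[u]:
            if color[v] == GRAY or (color[v] == WHITE and dfs(v)):
                return True
        color[u] = BLACK
        return False

    return any(color[u] == WHITE and dfs(u) for u in nodes)

# Hypothetical non-absorbing transition structures of two operating modes
nodes = {1, 2, 3}
mode1_edges = {(1, 2), (2, 3)}
mode2_edges = {(1, 3), (2, 1)}
union_edges = mode1_edges | mode2_edges  # union graph over non-absorbing states

# Each mode alone is acyclic, but the union contains the cycle 1 -> 2 -> 1,
# so acyclicity of the union graph fails for this pair of modes.
assert not has_cycle(nodes, mode1_edges)
assert has_cycle(nodes, union_edges)
```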

7. Dual-Regime AMC in Computer Vision: Saliency Detection

In image saliency detection, the DR-AMC paradigm has been instantiated as a pair of absorbing random walks on superpixel graphs—one regime tracking boundary-based absorption, the other tracking absorption to foreground priors. Absorption times in each regime serve as soft probabilistic cues (foreground and background possibility), which are then fused by solving a quadratic optimization problem. Multi-scale aggregation of DR-AMC saliency maps yields state-of-the-art empirical performance, demonstrating the DR-AMC’s utility in domains beyond classical stochastic control (Jiang et al., 2018).
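The absorption-time cue can be sketched with the classic fundamental-matrix identity: the vector of expected steps to absorption from each transient node is $(I - A)^{-1}\bm{1}$. The affinity matrix below is hypothetical and stands in for one regime's row-normalized superpixel graph with absorbing boundary nodes removed.

```python
import numpy as np

# Hypothetical sub-stochastic transient block A over 4 "superpixels";
# the missing row mass corresponds to transitions into absorbing (boundary) nodes.
A = np.array([[0.0, 0.4, 0.1, 0.1],
              [0.3, 0.0, 0.2, 0.1],
              [0.1, 0.2, 0.0, 0.3],
              [0.1, 0.1, 0.3, 0.0]])

# Expected steps to absorption from each transient node: (I - A)^{-1} 1.
absorb_time = np.linalg.solve(np.eye(4) - A, np.ones(4))

# In the boundary-absorbing regime, nodes that absorb slowly (large values)
# are weakly connected to the boundary and thus more likely foreground.
```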


Key References:

| Reference | Application Area | Notable Contribution |
|---|---|---|
| (Cosandal et al., 3 Dec 2025) | AoII minimization, remote estimation | Formal DR-AMC definition, absorption probabilities, DR-DPH formula, SMDP context |
| (Cosandal et al., 14 Apr 2025) | AoII control, threshold policy analysis | Cycle cost/statistics computation using DR-AMC/DR-PH |
| (Yazicioglu, 2020) | Switching Markov chains, reachability analysis | Absorption criteria from union/intersection graph analysis |
| (Jiang et al., 2018) | Image saliency, computer vision | Bidirectional (dual-regime) AMC structure for multi-cue saliency detection |

The DR-AMC framework synthesizes piecewise Markovian temporal heterogeneity into analytically tractable models, facilitating unified stochastic analysis and tractable optimization in both control and inference domains.
