Dual-Regime Absorbing Markov Chain
- Dual-Regime AMC is a finite discrete-time Markov chain with two transient regimes, each with its own transition and absorption dynamics.
- It employs sub-stochastic transition matrices and a boundary kernel for regime switching, enabling precise computation of absorption probabilities and time distributions.
- The framework supports applications in semantic information freshness, remote estimation, and computer vision by offering a tractable model for threshold-driven phase changes.
A dual-regime absorbing Markov chain (DR-AMC) is a finite discrete-time Markov chain architecture characterized by a piecewise regime structure: its transient state space is partitioned into two distinct operational regimes, each associated with its own transition (and absorption) dynamics. The process evolves according to one set of sub-stochastic transition and absorption matrices in Regime 1 for a fixed, possibly policy-determined, number of timesteps or until an absorption event; should it survive this phase, it undergoes a regime switch—via a boundary transition kernel—to Regime 2, where it continues to evolve until eventual absorption. This structure enables exact stochastic modeling of systems with threshold-driven or policy-induced phase changes, and permits closed-form computation of key performance metrics through a regime-aware extension of classical phase-type (PH) distributions. The DR-AMC framework provides analytical tractability and efficient parameterization for a broad class of applications in stochastic control, information freshness, remote estimation, and computer vision (Cosandal et al., 3 Dec 2025, Jiang et al., 2018, Cosandal et al., 14 Apr 2025, Yazicioglu, 2020).
1. Formal Definition and State Space Architecture
A DR-AMC is defined as a discrete-time Markov chain on a finite state space $\mathcal{S} = \mathcal{S}_1 \cup \mathcal{S}_2 \cup \mathcal{A}$ segmented into three disjoint subsets:
- Regime 1 transient states: $\mathcal{S}_1$, with $|\mathcal{S}_1| = n_1$
- Regime 2 transient states: $\mathcal{S}_2$, with $|\mathcal{S}_2| = n_2$
- Absorbing states: $\mathcal{A}$, with $|\mathcal{A}| = m$

The process initiates in a Regime 1 transient state according to an initial distribution $\boldsymbol{\alpha}$. Transitions within Regime 1 are governed by $\mathbf{T}_1$ (sub-stochastic on $\mathcal{S}_1$, of size $n_1 \times n_1$), with absorption governed by $\mathbf{R}_1$ ($n_1 \times m$). If absorption does not occur prior to the deterministic switching time $N$, the chain transitions into Regime 2 using a boundary transition matrix $\mathbf{B}$ ($n_1 \times n_2$). Subsequently, Regime 2 dynamics are determined by $\mathbf{T}_2$ and $\mathbf{R}_2$ of analogous dimensions. The structure of a DR-AMC is thus fully specified by the tuple $(\boldsymbol{\alpha}, \mathbf{T}_1, \mathbf{R}_1, \mathbf{B}, \mathbf{T}_2, \mathbf{R}_2, N)$ (Cosandal et al., 3 Dec 2025).
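As a concrete illustration, the tuple above can be encoded directly as NumPy arrays. The matrices below are hypothetical toy values, chosen only so that each transient row of $[\mathbf{T}_i \mid \mathbf{R}_i]$ sums to one; this is a minimal sketch, not a parameterization from the cited papers:

```python
import numpy as np

# Hypothetical DR-AMC with n1 = n2 = 2 transient states and m = 1 absorbing state.
alpha = np.array([1.0, 0.0])            # initial distribution over Regime 1 states

T1 = np.array([[0.6, 0.2],              # sub-stochastic Regime 1 transitions
               [0.1, 0.5]])
R1 = np.array([[0.2],                   # Regime 1 absorption probabilities
               [0.4]])

B = np.array([[0.7, 0.3],               # boundary kernel S1 -> S2 (row-stochastic)
              [0.5, 0.5]])

T2 = np.array([[0.3, 0.1],              # sub-stochastic Regime 2 transitions
               [0.2, 0.2]])
R2 = np.array([[0.6],                   # Regime 2 absorption probabilities
               [0.6]])

N = 3                                   # deterministic switching time

# Sanity checks: each transient row of [T_i | R_i] and each row of B must sum to 1.
assert np.allclose(np.hstack([T1, R1]).sum(axis=1), 1.0)
assert np.allclose(np.hstack([T2, R2]).sum(axis=1), 1.0)
assert np.allclose(B.sum(axis=1), 1.0)
print("DR-AMC tuple (alpha, T1, R1, B, T2, R2, N) is well-formed")
```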
2. Transition Matrices and Regime Switching
The regime-specific transition matrices are defined in block-canonical form:

$$\mathbf{P}_i = \begin{pmatrix} \mathbf{T}_i & \mathbf{R}_i \\ \mathbf{0} & \mathbf{I} \end{pmatrix}, \qquad i \in \{1, 2\},$$

where:
- $\mathbf{T}_1$ and $\mathbf{R}_1$ dictate Regime 1 evolution and absorption to $\mathcal{A}$.
- $\mathbf{T}_2$ and $\mathbf{R}_2$ are defined analogously for Regime 2.
- Regime switching at time $N$ is accomplished by the row-stochastic boundary matrix $\mathbf{B}$, satisfying $\sum_{j \in \mathcal{S}_2} B_{ij} = 1$ for each $i \in \mathcal{S}_1$.
- The (sub-stochastic) initial distribution entering Regime 2 is $\boldsymbol{\alpha}_2 = \boldsymbol{\alpha}\, \mathbf{T}_1^{N}\, \mathbf{B}$; normalizing by the survival probability $\boldsymbol{\alpha}\, \mathbf{T}_1^{N} \mathbf{1}$ yields the conditional entry distribution.
The switching paradigm supports both time-based (deterministic ) and policy-based (stopping time or control) transitions, with typical analysis assuming deterministic thresholds (Cosandal et al., 3 Dec 2025, Cosandal et al., 14 Apr 2025).
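The entry distribution into Regime 2 follows in a few lines of linear algebra. The matrices below are the same hypothetical toy values as in the earlier sketch, assuming a deterministic threshold $N$:

```python
import numpy as np

alpha = np.array([1.0, 0.0])
T1 = np.array([[0.6, 0.2], [0.1, 0.5]])   # hypothetical sub-stochastic Regime 1 block
B = np.array([[0.7, 0.3], [0.5, 0.5]])    # hypothetical row-stochastic boundary kernel
N = 3

# Unnormalized mass reaching Regime 2: survive N Regime 1 steps, then cross via B.
mass = alpha @ np.linalg.matrix_power(T1, N) @ B

survival = mass.sum()                   # P(no absorption during Regime 1)
alpha2 = mass / survival                # conditional entry distribution into Regime 2

print("P(switch) =", survival)
print("alpha2 =", alpha2)
```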
3. Absorption Probabilities and Fundamental Matrices
Absorption in the DR-AMC is characterized piecewise. In Regime 1, the probability of absorption into each absorbing state prior to the regime switch is

$$\mathbf{p}^{(1)} = \boldsymbol{\alpha} \left( \sum_{k=0}^{N-1} \mathbf{T}_1^{k} \right) \mathbf{R}_1.$$

If the chain transitions to Regime 2, the absorption probability vector is

$$\mathbf{p}^{(2)} = \boldsymbol{\alpha}\, \mathbf{T}_1^{N}\, \mathbf{B} \left( \mathbf{I} - \mathbf{T}_2 \right)^{-1} \mathbf{R}_2,$$

where $(\mathbf{I} - \mathbf{T}_2)^{-1}$ is the Regime 2 fundamental matrix. The total absorption probability $\mathbf{p} = \mathbf{p}^{(1)} + \mathbf{p}^{(2)}$ thus decomposes according to whether absorption occurs before or after the regime change. The construction leverages truncated geometric sums for Regime 1 dwell times and classic PH fundamental-matrix formulas in Regime 2. All moments and distributions of the absorption time $T$ follow accordingly (Cosandal et al., 3 Dec 2025).
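Both pieces of the decomposition are directly computable; with a single absorbing state they must sum to one. A sketch using the same hypothetical toy matrices as above:

```python
import numpy as np

alpha = np.array([1.0, 0.0])
T1 = np.array([[0.6, 0.2], [0.1, 0.5]]); R1 = np.array([[0.2], [0.4]])
B  = np.array([[0.7, 0.3], [0.5, 0.5]])
T2 = np.array([[0.3, 0.1], [0.2, 0.2]]); R2 = np.array([[0.6], [0.6]])
N = 3

# Truncated geometric sum sum_{k=0}^{N-1} T1^k for Regime 1 dwell times.
S = sum(np.linalg.matrix_power(T1, k) for k in range(N))
p1 = alpha @ S @ R1                     # absorption before the switch

# Classic PH fundamental matrix (I - T2)^{-1} for Regime 2.
F = np.linalg.inv(np.eye(2) - T2)
p2 = alpha @ np.linalg.matrix_power(T1, N) @ B @ F @ R2   # absorption after the switch

print("p1 =", p1, " p2 =", p2, " total =", p1 + p2)
```

With one absorbing state, the total is exactly 1, since every transient row of $[\mathbf{T}_i \mid \mathbf{R}_i]$ is stochastic and the chain is absorbed with probability one.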
4. Dual-Regime Phase-Type Distribution of Absorption Time
Let $T$ denote the random absorption time. The dual-regime discrete phase-type (DR-DPH) distribution describes the law of $T$:

$$\Pr(T = k) = \begin{cases} \boldsymbol{\alpha}\, \mathbf{T}_1^{\,k-1}\, \mathbf{r}_1, & 1 \le k \le N, \\ \boldsymbol{\alpha}\, \mathbf{T}_1^{\,N}\, \mathbf{B}\, \mathbf{T}_2^{\,k-N-1}\, \mathbf{r}_2, & k > N, \end{cases}$$

where $\mathbf{r}_i = \mathbf{R}_i \mathbf{1}$ is the total absorption vector of Regime $i$. This piecewise construction generalizes the single-regime PH distribution, capturing the two-phase absorption process. Higher moments—including factorial moments—can be derived in closed form, with ordinary moments assembled via Stirling number combinatorics. These explicit statistics enable renewal-reward evaluations of performance in threshold-driven control systems (Cosandal et al., 3 Dec 2025, Cosandal et al., 14 Apr 2025).
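The piecewise law above can be evaluated numerically; truncating the support at a large horizon recovers essentially all probability mass and gives the mean absorption time. A sketch with the same hypothetical toy matrices:

```python
import numpy as np

alpha = np.array([1.0, 0.0])
T1 = np.array([[0.6, 0.2], [0.1, 0.5]]); r1 = np.array([0.2, 0.4])  # r1 = R1 @ 1
B  = np.array([[0.7, 0.3], [0.5, 0.5]])
T2 = np.array([[0.3, 0.1], [0.2, 0.2]]); r2 = np.array([0.6, 0.6])  # r2 = R2 @ 1
N = 3

def dr_dph_pmf(k):
    """P(T = k) under the dual-regime discrete phase-type law."""
    if k <= N:  # absorbed while still in Regime 1
        return alpha @ np.linalg.matrix_power(T1, k - 1) @ r1
    # survive Regime 1, cross via B, then absorb in Regime 2
    head = alpha @ np.linalg.matrix_power(T1, N) @ B
    return head @ np.linalg.matrix_power(T2, k - N - 1) @ r2

ks = np.arange(1, 200)                  # truncation horizon (mass decays geometrically)
pmf = np.array([dr_dph_pmf(k) for k in ks])
print("total mass ~", pmf.sum())
print("E[T] ~", (ks * pmf).sum())       # first moment of the absorption time
```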
5. Applications in Information Freshness and Control
A principal application of DR-AMCs is in modeling threshold policies for semantic-aware freshness metrics such as Age of Incorrect Information (AoII). Here, Regime 1 models periods of no transmission (passive observation), and Regime 2 models aggressive update transmission upon exceeding an AoII threshold. The DR-AMC framework precisely quantifies:
- Distribution and moments of out-of-sync durations
- Expected cost for arbitrary AoII penalties
- Transmission costs and overall system performance
This analytic tractability supports semi-Markov decision process (SMDP) formulations for optimal remote estimation policies under transmission costs, outperforming single-threshold or randomized policies in empirical evaluation (Cosandal et al., 3 Dec 2025, Cosandal et al., 14 Apr 2025).
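The renewal-reward flavor of these evaluations can be sketched numerically: treating the out-of-sync duration $T$ as one cycle of the DR-DPH law, a long-run average cost is the ratio of expected cycle cost to expected cycle length. The penalty shape (linear AoII), the per-slot transmission cost, and all matrices below are hypothetical illustration values, not the cost model of the cited papers:

```python
import numpy as np

# Hypothetical toy DR-AMC: Regime 1 = silent observation, Regime 2 = transmitting
# once the AoII threshold N is crossed.
alpha = np.array([1.0, 0.0])
T1 = np.array([[0.6, 0.2], [0.1, 0.5]]); r1 = np.array([0.2, 0.4])
B  = np.array([[0.7, 0.3], [0.5, 0.5]])
T2 = np.array([[0.3, 0.1], [0.2, 0.2]]); r2 = np.array([0.6, 0.6])
N, c_tx = 3, 0.5                        # threshold and per-slot transmission cost

def pmf(k):                             # DR-DPH law of the out-of-sync duration T
    if k <= N:
        return alpha @ np.linalg.matrix_power(T1, k - 1) @ r1
    head = alpha @ np.linalg.matrix_power(T1, N) @ B
    return head @ np.linalg.matrix_power(T2, k - N - 1) @ r2

ks = np.arange(1, 300)
p = np.array([pmf(k) for k in ks])

E_T = (ks * p).sum()                                  # mean cycle length
E_age = ((ks * (ks + 1) / 2) * p).sum()               # E[sum_{k=1}^{T} k]: linear AoII penalty
E_tx = (np.maximum(ks - N, 0) * p).sum() * c_tx       # transmissions only in Regime 2 slots
print("avg cost per slot ~", (E_age + E_tx) / E_T)    # renewal-reward ratio
```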
6. Comparative Regime Switching and Graph-Based Perspectives
DR-AMCs are naturally related to time-varying or switching Markov chains, as explored in graph-theoretic absorption analyses. In the context of multiple operating modes (e.g., two regimes), reachability and absorption can be characterized using union and intersection graphs constructed from each mode's non-absorbing transition structure. Key absorption conditions include:
- Stabilizability: existence of a state-feedback switching policy ensuring absorption
- Sufficient conditions under arbitrary regime switching: acyclicity, weak acyclicity, or distance contraction in the union/intersection graphs
These results provide complementary perspectives on absorption in DR-AMC-like systems where the regime is determined by policy or environmental events, further broadening the modeling power (Yazicioglu, 2020).
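One of the sufficient conditions listed above, acyclicity of the union graph, is easy to check mechanically: take the non-absorbing edges of each mode, form their union, and run a cycle test. The edge sets below are hypothetical; an acyclic union implies absorption under arbitrary regime switching:

```python
# Nodes are transient states; edges are nonzero transitions that do not
# lead to an absorbing state. Both mode graphs are hypothetical examples.
mode1 = {0: [1], 1: [2], 2: []}         # non-absorbing edges in regime 1
mode2 = {0: [2], 1: [], 2: []}          # non-absorbing edges in regime 2

def union_graph(g1, g2):
    """Union of two adjacency maps over the union of their node sets."""
    return {v: sorted(set(g1.get(v, [])) | set(g2.get(v, [])))
            for v in set(g1) | set(g2)}

def is_acyclic(g):
    """Recursive DFS with white/grey/black coloring; hitting grey means a cycle."""
    color = {v: 0 for v in g}           # 0 = unvisited, 1 = on stack, 2 = done
    def dfs(v):
        color[v] = 1
        for w in g[v]:
            if color[w] == 1 or (color[w] == 0 and not dfs(w)):
                return False            # back edge or cycle found deeper
        color[v] = 2
        return True
    return all(color[v] == 2 or dfs(v) for v in g)

U = union_graph(mode1, mode2)
print("union graph acyclic:", is_acyclic(U))   # True => absorbed under any switching
```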
7. Dual-Regime AMC in Computer Vision: Saliency Detection
In image saliency detection, the DR-AMC paradigm has been instantiated as a pair of absorbing random walks on superpixel graphs—one regime tracking boundary-based absorption, the other tracking absorption to foreground priors. Absorption times in each regime serve as soft probabilistic cues (foreground and background possibility), which are then fused by solving a quadratic optimization problem. Multi-scale aggregation of DR-AMC saliency maps yields state-of-the-art empirical performance, demonstrating the DR-AMC’s utility in domains beyond classical stochastic control (Jiang et al., 2018).
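The absorption-time cue at the heart of this construction is a standard absorbing-chain quantity: with the boundary (or foreground-prior) states made absorbing and $\mathbf{Q}$ the sub-stochastic transition block among transient superpixels, the expected steps to absorption are $\mathbf{t} = (\mathbf{I} - \mathbf{Q})^{-1}\mathbf{1}$. A toy sketch with a hypothetical 3-superpixel matrix (not from the cited paper):

```python
import numpy as np

# Hypothetical sub-stochastic transitions among 3 transient superpixels;
# the remaining row mass flows to the absorbing boundary states.
Q = np.array([[0.5, 0.2, 0.1],
              [0.2, 0.5, 0.2],
              [0.1, 0.2, 0.6]])

# Expected absorption time per superpixel: solve (I - Q) t = 1.
t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print("absorption times:", t)           # larger time => weaker pull toward boundary
```

For boundary-based absorption, a larger expected time indicates weaker coupling to the background and so serves as a foreground (saliency) cue; the foreground-prior regime supplies the complementary cue before fusion.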
Key References:
| Reference | Application Area | Notable Contribution |
|---|---|---|
| (Cosandal et al., 3 Dec 2025) | AoII minimization, remote estimation | Formal DR-AMC definition, absorption probabilities, DR-DPH formula, SMDP context |
| (Cosandal et al., 14 Apr 2025) | AoII control, threshold policy analysis | Cycle cost/stats computation using DR-AMC/DR-PH |
| (Yazicioglu, 2020) | Switching Markov chains, reachability analysis | Absorption criteria from union/intersection graph analysis |
| (Jiang et al., 2018) | Image saliency, computer vision | Bidirectional (dual-regime) AMC structure for multi-cue saliency detection |
The DR-AMC framework synthesizes piecewise Markovian temporal heterogeneity into analytically tractable models, facilitating unified stochastic analysis and tractable optimization in both control and inference domains.