Input-Driven Markov Typicality
- Input-driven Markov typicality is a framework that characterizes the empirical behavior of channel inputs and outputs in joint source–channel coding over Markov channels.
- It leverages strictly-causal encoding to exploit the underlying Markov structure, leading to sharper characterizations of achievable empirical distributions.
- The framework extends classical i.i.d.-based approaches by utilizing ergodic theory and joint covering/packing lemmas, thereby improving analysis for systems with memory.
Input-driven Markov typicality is a framework for analyzing the empirical behavior of sequences of channel inputs and outputs in joint source–channel coding over finite-state (Markov) channels, particularly in contexts with strictly-causal encoders and noncausal decoders. By directly exploiting the Markov structure induced by the encoder, as opposed to assuming blockwise independence as in classical discrete memoryless channel (DMC) settings, input-driven Markov typicality enables sharper characterization of the set of empirical joint distributions achievable by coding schemes. This approach underlies single-letter inner and outer bounds for empirical coordination—a generalized formulation of joint source–channel coding—over systems where the channel state evolves as a controlled Markov process driven by the code (Zhao et al., 16 Jan 2026).
1. System Setting and Problem Formulation
The canonical system involves:
- A memoryless i.i.d. source $S_1, S_2, \ldots$ drawn from a distribution $P_S$ on a finite alphabet,
- A finite-state channel (FSC) whose latent state $Q_i$ evolves according to a controlled Markov kernel $W(q_{i+1} \mid q_i, x_i)$, with a known initial state $q_1$ and output law $P(y_i \mid x_i, q_i)$,
- A strictly-causal encoder producing $x_i = f_i(s^{i-1})$ for $i = 1, \ldots, n$,
- A noncausal decoder producing $\hat{s}^n = g(y^n)$.
The joint distribution induced by an $n$-code is the product of the source law, the deterministic encoding maps, the channel output law, and the controlled state kernel:
$$P(s^n, q^n, x^n, y^n) = \prod_{i=1}^{n} P_S(s_i)\, \mathbf{1}\{x_i = f_i(s^{i-1})\}\, P(y_i \mid x_i, q_i)\, W(q_{i+1} \mid q_i, x_i),$$
with $q_1$ fixed to the known initial state.
For empirical coordination, the object of study is the joint type (empirical distribution) of the triple of sequences $(s^n, x^n, y^n)$, counting the frequency of each symbol triple $(s, x, y)$ along the block. A target distribution $\bar{P}_{SXY}$ is achievable if, for every $\epsilon > 0$, there exists for large $n$ an $n$-code such that, with probability at least $1 - \epsilon$, the $L_1$-distance between the empirical type and $\bar{P}_{SXY}$ is at most $\epsilon$. Under the standard unichain, irreducibility, and aperiodicity assumptions on the induced Markov state process, there exists a unique stationary distribution $\pi$ satisfying the balance equation
$$\pi(q') = \sum_{q, x} \pi(q)\, P_X(x)\, W(q' \mid q, x),$$
where $P_X$ is the input law driving the chain.
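As a concrete illustration of the input-driven state dynamics, the following sketch computes the stationary state distribution $\pi$ as the fixed point of $\pi(q') = \sum_{q,x} \pi(q) P_X(x) W(q' \mid q, x)$ under i.i.d. inputs, and checks it against a long simulated run. The two-state kernel and input law are hypothetical values chosen for illustration, not taken from the paper.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical controlled kernel W(q' | q, x) for states {0, 1}, inputs {0, 1}.
W = {(0, 0): [0.9, 0.1], (0, 1): [0.2, 0.8],
     (1, 0): [0.6, 0.4], (1, 1): [0.1, 0.9]}
P_X = [0.5, 0.5]  # i.i.d. input law driving the state chain

def stationary(W, P_X, iters=1000):
    """Fixed-point iteration for pi(q') = sum_{q,x} pi(q) P_X(x) W(q'|q,x)."""
    pi = [0.5, 0.5]
    for _ in range(iters):
        pi = [sum(pi[q] * P_X[x] * W[(q, x)][qp] for q in (0, 1) for x in (0, 1))
              for qp in (0, 1)]
    return pi

def simulate(n, q1=0):
    """Empirical state frequencies of the input-driven chain over n steps."""
    q, counts = q1, Counter()
    for _ in range(n):
        counts[q] += 1
        x = random.choices((0, 1), weights=P_X)[0]
        q = random.choices((0, 1), weights=W[(q, x)])[0]
    return [counts[0] / n, counts[1] / n]

pi = stationary(W, P_X)
emp = simulate(200_000)
print(pi, emp)  # the empirical state frequencies approach pi
```

Under the ergodicity assumptions above, the long-run state frequencies converge to $\pi$ regardless of the initial state, which is exactly what the simulation exhibits.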
2. Input-Driven Markov Typicality: Definitions
Input-driven Markov typicality is formally defined as follows:
- The joint Markov-type of a pair of sequences $(x^n, y^n)$ is the empirical distribution
$$T_{x^n, y^n}(x, y) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{x_i = x,\; y_i = y\}.$$
- For $\epsilon > 0$, the $\epsilon$-typical set with respect to a stationary distribution $\pi_{XY}$ is
$$\mathcal{T}_\epsilon^{(n)}(\pi_{XY}) = \bigl\{(x^n, y^n) : \|T_{x^n, y^n} - \pi_{XY}\|_1 \le \epsilon\bigr\}.$$
- For fixed $x^n$, the conditional typical set is defined by constraining the empirical joint $T_{x^n, y^n}$ to be close in $L_1$-norm to $T_{x^n} \cdot \pi_{Y \mid X}$, where $T_{x^n}$ is the empirical type of $x^n$.
When the channel has a single state (so it is memoryless), input-driven Markov typicality reduces to classic strong joint-typicality for $(x^n, y^n)$.
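Under this single-letter reading of the definitions, joint-type computation and $\epsilon$-typicality membership can be sketched as follows; the reference distribution, sequences, and tolerance below are illustrative choices, not values from the paper.

```python
from collections import Counter

def joint_type(xs, ys):
    """Empirical joint distribution T_{x^n,y^n}(x, y) of two equal-length sequences."""
    n = len(xs)
    counts = Counter(zip(xs, ys))
    return {pair: c / n for pair, c in counts.items()}

def is_typical(xs, ys, pi, eps):
    """Membership in the eps-typical set: L1 distance between the joint type
    and the reference (stationary) distribution pi is at most eps."""
    T = joint_type(xs, ys)
    support = set(T) | set(pi)
    l1 = sum(abs(T.get(p, 0.0) - pi.get(p, 0.0)) for p in support)
    return l1 <= eps

# Illustrative reference distribution on {0,1} x {0,1}:
pi = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
xs = [0, 0, 1, 1, 0, 1, 0, 1, 0, 1]
ys = [0, 0, 1, 1, 1, 1, 0, 0, 0, 1]
print(is_typical(xs, ys, pi, eps=0.5))  # prints True
```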
3. Fundamental Properties of Input-Driven Markov Typicality
- Ergodicity: If the input sequence is i.i.d. with law $P_X$, then $(X^n, Y^n)$ is with high probability (as $n \to \infty$) jointly typical with respect to the stationary distribution $\pi_{XY}$:
$$\Pr\bigl[(X^n, Y^n) \in \mathcal{T}_\epsilon^{(n)}(\pi_{XY})\bigr] \to 1 \quad \text{as } n \to \infty.$$
- AEP and cardinality: For every $\epsilon > 0$ there exist $\delta(\epsilon) > 0$ (with $\delta(\epsilon) \to 0$ as $\epsilon \to 0$) and $n_0$ such that, for all $n \ge n_0$ and all $(x^n, y^n) \in \mathcal{T}_\epsilon^{(n)}(\pi_{XY})$, the probability of the pair lies between $2^{-n(H + \delta)}$ and $2^{-n(H - \delta)}$, where $H$ is the entropy rate of the stationary pair process; moreover, $\bigl|\mathcal{T}_\epsilon^{(n)}(\pi_{XY})\bigr| \le 2^{n(H + \delta)}$.
- Marginal and conditional typicality: If $(x^n, y^n)$ is jointly typical, then each marginal sequence is typical for its own stationary distribution. The converse also holds with an appropriate adjustment of the typicality parameter.
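The ergodicity property can be illustrated numerically. For a hypothetical two-state channel driven by i.i.d. uniform inputs (all kernel values below are illustrative, not from the paper), the fraction of blocks whose joint type of $(x^n, y^n)$ falls inside a fixed $L_1$ ball around a long-run estimate of $\pi_{XY}$ grows toward one with the block length:

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical two-state channel: state kernel W(q'|q,x) and output law V(y|x,q),
# both keyed by (q, x).
W = {(0, 0): [0.9, 0.1], (0, 1): [0.2, 0.8],
     (1, 0): [0.6, 0.4], (1, 1): [0.1, 0.9]}
V = {(0, 0): [0.95, 0.05], (0, 1): [0.1, 0.9],
     (1, 0): [0.7, 0.3], (1, 1): [0.2, 0.8]}

def run(n):
    """One block with i.i.d. uniform inputs; returns the joint type of (x^n, y^n)."""
    q, counts = 0, Counter()
    for _ in range(n):
        x = random.getrandbits(1)
        y = random.choices((0, 1), weights=V[(q, x)])[0]
        counts[(x, y)] += 1
        q = random.choices((0, 1), weights=W[(q, x)])[0]
    return {p: c / n for p, c in counts.items()}

def l1(T, pi):
    keys = set(T) | set(pi)
    return sum(abs(T.get(k, 0.0) - pi.get(k, 0.0)) for k in keys)

pi_xy = run(200_000)  # long-run estimate of the stationary joint distribution
fracs = {}
for n in (100, 5000):
    fracs[n] = sum(l1(run(n), pi_xy) <= 0.05 for _ in range(100)) / 100
print(fracs)  # the typical fraction grows toward 1 as n increases
```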
4. Achievability and Coding Theorems
The central inner bound for empirical coordination over Markov channels is as follows: a target $\bar{P}_{SXY}$ is achievable if there exists an auxiliary random variable $U$ on a finite alphabet such that the joint distribution factorizes through $U$ in the prescribed Markov structure and satisfies the associated single-letter mutual-information constraint (stated in Zhao et al., 16 Jan 2026).
Achievability is demonstrated via a block-Markov coordination coding scheme involving
- Codebook generation with i.i.d. auxiliary sequences and, for each auxiliary sequence, a subcodebook of channel-input sequences,
- Covering and packing arguments for the source/auxiliary variables and the channel outputs, respectively,
- Error analysis leveraging covering lemmas (for the source side), Markov typicality, and a two-stage joint packing lemma (for the channel side).
The blockwise Markov property is explicitly maintained by passing the channel's state at the end of one block as the initial state for the next. This mechanism, together with input-driven Markov typicality, extends beyond the bounds given by i.i.d.-based type arguments for DMCs (Zhao et al., 16 Jan 2026).
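The block-to-block state hand-off can be sketched as follows. This is a minimal simulation with a hypothetical two-state kernel; the point is the code structure (the terminal state of one block seeds the next block), not the particular kernel values.

```python
import random

random.seed(2)

# Hypothetical controlled kernel W(q' | q, x), keyed by (q, x).
W = {(0, 0): [0.9, 0.1], (0, 1): [0.2, 0.8],
     (1, 0): [0.6, 0.4], (1, 1): [0.1, 0.9]}

def send_block(x_block, q_init):
    """Drive the state chain with one block of inputs; return the visited states
    and the terminal state, which seeds the next block."""
    q, states = q_init, []
    for x in x_block:
        states.append(q)
        q = random.choices((0, 1), weights=W[(q, x)])[0]
    return states, q

# Block-Markov transmission: the terminal state of block b becomes the
# initial state of block b+1, preserving the Markov structure across blocks.
q = 0
all_states = []
for b in range(5):
    x_block = [random.getrandbits(1) for _ in range(1000)]
    states, q = send_block(x_block, q)
    all_states.extend(states)
print(len(all_states), q)
```

Because the state is carried across block boundaries rather than reset, the concatenated state trajectory is a single realization of the controlled Markov chain, which is what the block-Markov analysis exploits.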
5. Converse Bounds and Necessity
Any $n$-code that achieves empirical coordination with target $\bar{P}$ must satisfy the same single-letter information constraint. By an argument standard in information theory (Csiszár–Körner chain-rule techniques), with proper auxiliary time-sharing variables one shows that the joint law induced by the code at a uniformly chosen time index obeys the factorization and information inequality of the inner bound.
This outer bound matches the achievable region described via the inner bound when the Markov structure is accurately captured. The analytic techniques involve ergodic theory for finite-state Markov chains and joint covering/packing lemmas specialized to block-Markov dependent codewords.
6. Relations to Classical Cases and Illustrative Examples
Several special cases and examples highlight the significance of input-driven Markov typicality:
- DMC reduction: When the channel has a single state, the framework recovers classic strictly-causal coordination results for discrete memoryless channels, as in [Cuff–Schieler 2011]. The Markov-typical sets reduce to strong joint-typicality in the i.i.d. (memoryless) setting.
- Source–Channel Separation: When the target renders the source statistically independent of the channel input–output pair, the achievable region reduces to the requirement that the channel mutual information exceed the source rate constraint.
- Binary-Input Markov Channel: Consider a binary-input channel whose state evolves under a controlled kernel with a small transition parameter. The single-letter bound can be evaluated explicitly and, when the transition parameter is small, input-driven Markov typicality yields strictly larger achievable regions than i.i.d. block bounds based on the averaged memoryless channel, illustrating strict improvement over independence-based analyses.
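For a toy instance in this spirit (every kernel and parameter below is hypothetical, chosen only to make the computation concrete), the stationary state distribution and the induced stationary input–output statistics can be evaluated in closed form:

```python
def stationary_state(W, p_x):
    """Solve pi(q') = sum_{q,x} pi(q) p_x(x) W(q'|q,x) for a two-state chain."""
    # Input-averaged kernel A[q][q'] = sum_x p_x(x) W(q'|q,x)
    A = [[sum(p_x[x] * W[(q, x)][qp] for x in (0, 1)) for qp in (0, 1)]
         for q in (0, 1)]
    pi0 = A[1][0] / (A[0][1] + A[1][0])  # standard stationary formula, 2 states
    return [pi0, 1 - pi0]

d = 0.05  # hypothetical state-flip parameter: the input pulls the state toward x
W = {(0, 0): [1 - d, d], (0, 1): [d, 1 - d],
     (1, 0): [1 - d, d], (1, 1): [d, 1 - d]}
V = {(0, 0): [0.9, 0.1], (0, 1): [0.5, 0.5],
     (1, 0): [0.5, 0.5], (1, 1): [0.1, 0.9]}  # output law P(y | x, q), keyed (q, x)

p_x = [0.7, 0.3]  # i.i.d. input law
pi = stationary_state(W, p_x)
# Stationary joint P(x, y) = sum_q pi(q) p_x(x) V(y | x, q)
P_xy = {(x, y): sum(pi[q] * p_x[x] * V[(q, x)][y] for q in (0, 1))
        for x in (0, 1) for y in (0, 1)}
print(pi, P_xy)
```

A biased input law skews the stationary state toward the more frequent input, and that skew propagates into the stationary input–output statistics; an analysis that fixes a state distribution independently of the input would miss this coupling, which is the qualitative point of the example.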
7. Connections to Literature and Methodological Innovations
Input-driven Markov typicality extends approaches used in empirical coordination over DMCs [Cuff–Zhao 2011, Le Treust–Oechtering 2017], integrating ergodic Markov process techniques with classical type-based covering and packing arguments [Csiszár–Körner 2011]. Unlike classical schemes relying on blockwise independence, this method directly exploits the controlled Markov property induced by strictly-causal encoding, enabling a more accurate characterization of achievable empirical distributions (Zhao et al., 16 Jan 2026).
The proof techniques rely critically on the ergodic theorem for finite-state Markov chains and two-stage joint packing lemmas (conditioning on boundary states), as detailed in the appendices of (Zhao et al., 16 Jan 2026). This framework provides a canonical methodology for joint source–channel coding design in systems with memory and strictly-causal state evolution, with demonstrated benefits over existing independence-based analyses.