
Input-Driven Markov Typicality

Updated 23 January 2026
  • Input-driven Markov typicality is a framework that defines the empirical behavior of channel inputs and outputs in joint source–channel coding over Markov channels.
  • It leverages strictly-causal encoding to exploit the underlying Markov structure, leading to sharper characterizations of achievable empirical distributions.
  • The framework extends classical i.i.d.-based approaches by utilizing ergodic theory and joint covering/packing lemmas, thereby improving analysis for systems with memory.

Input-driven Markov typicality is a framework for analyzing the empirical behavior of sequences of channel inputs and outputs in joint source–channel coding over finite-state (Markov) channels, particularly in contexts with strictly-causal encoders and noncausal decoders. By directly exploiting the Markov structure induced by the encoder, as opposed to assuming blockwise independence as in classical discrete memoryless channel (DMC) settings, input-driven Markov typicality enables sharper characterization of the set of empirical joint distributions achievable by coding schemes. This approach underlies single-letter inner and outer bounds for empirical coordination—a generalized formulation of joint source–channel coding—over systems where the channel state evolves as a controlled Markov process driven by the code (Zhao et al., 16 Jan 2026).

1. System Setting and Problem Formulation

The canonical system involves:

  • A memoryless i.i.d. source $U^n \sim \prod_{t=1}^n P_U$ on a finite alphabet $\mathcal{U}$,
  • A finite-state channel (FSC) with latent states $Y_t$ evolving according to the controlled Markov kernel $P_{Y_t|Y_{t-1},X_t} = \mathsf{W}_{Y|X,Y'}(y_t|x_t,y_{t-1})$, with known initial state $Y_0$,
  • A strictly-causal encoder $X_t = f_t(U^{t-1})$ for $t=1,\dots,n$,
  • A noncausal decoder $V^n = g(Y^n)$.

The joint distribution induced by an $(n)$-code is

$$P_{U^n,X^n,Y^n,V^n} = \prod_{t=1}^n P_U(u_t)\, \delta_{x_t = f_t(u^{t-1})}\, \mathsf{W}_{Y|X,Y'}(y_t|x_t,y_{t-1})\, \delta_{v^n = g(y^n)}.$$
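As a concrete illustration, this induced joint law can be simulated forward in time. The toy encoder, noise parameter, and decoder below are illustrative placeholders (not a construction from the paper); the point is only that $X_t$ depends on $U^{t-1}$ alone, while $Y_t$ evolves through the controlled kernel.

```python
import random

random.seed(0)

# Toy instance (all choices here are illustrative assumptions):
# binary source, binary input, channel Y_t = X_t XOR Y_{t-1} XOR Z_t.
n = 8
p_z = 0.1          # channel noise parameter
y0 = 0             # known initial state

def encoder(t, u_past):
    # Strictly-causal: X_t may depend only on U^{t-1}.
    # Placeholder choice: parity of the past source symbols.
    return sum(u_past) % 2

u = [random.randint(0, 1) for _ in range(n)]   # i.i.d. Bern(1/2) source
x, y = [], []
y_prev = y0
for t in range(n):
    xt = encoder(t, u[:t])                     # uses U^{t-1} only
    zt = 1 if random.random() < p_z else 0
    yt = xt ^ y_prev ^ zt                      # controlled Markov kernel step
    x.append(xt); y.append(yt)
    y_prev = yt

v = y[:]   # trivial noncausal decoder g(Y^n) = Y^n (placeholder)
print(u, x, y, v)
```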

For empirical coordination, the object of study is the $n$-type (empirical distribution) $Q^n(u,x,y',y,v)$, which counts the frequency of the tuples $(u_t, x_t, y_{t-1}, y_t, v_t)$. A target distribution $\mathbb{P}_{U,X,Y',Y,V}$ is achievable if, for every $\epsilon > 0$ and all sufficiently large $n$, there exists an $(n)$-code such that, with probability at least $1-\epsilon$, the $\ell_1$-distance between $Q^n$ and $\mathbb{P}$ is at most $\epsilon$. Under the standard unichain, irreducibility, and aperiodicity assumptions on the induced Markov process for $Y^n$, there exists a unique stationary distribution $\pi_Y$ satisfying

$$\pi_Y(y) = \sum_{x,y'} \pi_Y(y')\, P_X(x)\, \mathsf{W}_{Y|X,Y'}(y|x,y').$$
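The stationary distribution can be found numerically as the fixed point of the $Y$-chain induced by averaging the kernel over the input law. The kernel values below are illustrative assumptions, and power iteration is a minimal sketch rather than anything prescribed by the paper.

```python
# Assumed toy kernel: binary state, binary input (illustrative numbers).
P_X = {0: 0.5, 1: 0.5}
# W[(x, y_prev)][y] = W(y | x, y')
W = {(0, 0): {0: 0.9, 1: 0.1}, (0, 1): {0: 0.2, 1: 0.8},
     (1, 0): {0: 0.3, 1: 0.7}, (1, 1): {0: 0.6, 1: 0.4}}

states = [0, 1]
# Induced Y-chain transition: T(y' -> y) = sum_x P_X(x) W(y|x,y')
T = {yp: {y: sum(P_X[x] * W[(x, yp)][y] for x in P_X) for y in states}
     for yp in states}

pi = {y: 1.0 / len(states) for y in states}
for _ in range(200):                 # power iteration toward the fixed point
    pi = {y: sum(pi[yp] * T[yp][y] for yp in states) for y in states}

# pi now satisfies pi(y) = sum_{x,y'} pi(y') P_X(x) W(y|x,y')
print(pi)
```

For these particular numbers the induced chain is doubly stochastic, so the fixed point is the uniform distribution.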

2. Input-Driven Markov Typicality: Definitions

Input-driven Markov typicality is formally defined as follows:

  • The joint Markov-type of a pair $(x^n, y^n)$ is

$$Q^n_{Y'XY}(i,x,j) = \frac{1}{n} \sum_{t=1}^n \mathbf{1}\{y_{t-1}=i,\, x_t=x,\, y_t=j\}.$$

  • For $\epsilon > 0$, the $\epsilon$-typical set with respect to the stationary distribution $\mathbb{Q}_{Y'XY}(i,x,j) = \pi_Y(i)\, P_X(x)\, \mathsf{W}_{Y|X,Y'}(j|x,i)$ is

$$\mathcal{T}_\epsilon^n(\mathbb{Q}_{Y'XY}) = \{ (x^n, y^n) : \| Q^n_{Y'XY} - \mathbb{Q}_{Y'XY} \|_1 \leq \epsilon \}.$$

  • For fixed $x^n$, the conditional typical set is defined by constraining the empirical joint to be close in $\ell_1$-norm to $\pi_Y\, Q^n_X\, \mathsf{W}_{Y|X,Y'}$, where $Q^n_X(x)$ is the empirical type of $x^n$.

When the channel is memoryless ($Y' = \varnothing$), input-driven Markov typicality reduces to classical strong joint typicality for $(X^n, Y^n)$.
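The joint Markov-type is straightforward to compute from a pair of sequences. The sketch below (with made-up toy sequences) tallies the triples $(y_{t-1}, x_t, y_t)$ and measures $\ell_1$-distance to an arbitrary target distribution.

```python
from collections import Counter

def markov_type(x_seq, y_seq, y0):
    # Empirical joint Markov type Q^n(i, x, j) over triples (y_{t-1}, x_t, y_t).
    n = len(x_seq)
    counts = Counter()
    y_prev = y0
    for xt, yt in zip(x_seq, y_seq):
        counts[(y_prev, xt, yt)] += 1
        y_prev = yt
    return {k: c / n for k, c in counts.items()}

def l1_dist(q, target):
    # l1-distance between two pmfs given as dicts (missing keys are 0).
    keys = set(q) | set(target)
    return sum(abs(q.get(k, 0.0) - target.get(k, 0.0)) for k in keys)

# Toy sequences (illustrative):
x_seq = [0, 1, 1, 0]
y_seq = [0, 1, 0, 0]
Q = markov_type(x_seq, y_seq, y0=0)
print(Q)
# (x^n, y^n) is eps-typical iff l1_dist(Q, target) <= eps, with target
# pi_Y(i) P_X(x) W(j|x,i).
```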

3. Fundamental Properties of Input-Driven Markov Typicality

  • Ergodicity: If the input sequence $X_t$ is i.i.d. $P_X$, then $(X^n, Y^n)$ is jointly typical with respect to $\mathbb{Q}_{Y'XY}$ with probability approaching one as $n \to \infty$. Specifically,

$$\lim_{n \to \infty} \Pr\left((X^n, Y^n) \in \mathcal{T}^n_{\delta}(\mathbb{Q}_{Y'XY})\right) = 1 \quad \forall\, \delta > 0.$$

  • AEP and Cardinality: For every $\delta > 0$, there exist $\epsilon_0$ and $n_0$ such that for all $\epsilon < \epsilon_0$, $n > n_0$, and $(x^n,y^n) \in \mathcal{T}^n_\epsilon(\mathbb{Q})$,

$$2^{-n(H(X,Y|Y')+\delta)} < P_X^{\otimes n}(x^n) \prod_{t=1}^n \mathsf{W}(y_t|x_t,y_{t-1}) < 2^{-n(H(X,Y|Y')-\delta)},$$

and $|\mathcal{T}^n_\epsilon(\mathbb{Q}_{Y'XY})| \leq 2^{n(H(X,Y|Y')+\delta)}$.

  • Marginal and Conditional Typicality: If $(x^n, y^n) \in \mathcal{T}^n_\epsilon(\mathbb{Q}_{Y'XY})$, then $x^n \in \mathcal{T}^n_\epsilon(P_X)$ and $y^n$ is typical with respect to its own stationary distribution. The converse also holds with an appropriate adjustment of the typicality parameter.
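The ergodicity property above can be checked by simulation: driving the binary XOR channel of Section 6 with i.i.d. uniform inputs, the empirical Markov type approaches $\mathbb{Q}_{Y'XY} = \pi_Y P_X \mathsf{W}$, where $\pi_Y$ is uniform for this channel. A minimal sketch with an illustrative noise parameter:

```python
import random
random.seed(1)

p = 0.1            # noise parameter (illustrative)
n = 100_000
y_prev = 0
counts = {}
for _ in range(n):
    x = random.randint(0, 1)          # i.i.d. Bern(1/2) input
    z = 1 if random.random() < p else 0
    y = x ^ y_prev ^ z                # Y_t = X_t XOR Y_{t-1} XOR Z_t
    key = (y_prev, x, y)
    counts[key] = counts.get(key, 0) + 1
    y_prev = y

Q_emp = {k: c / n for k, c in counts.items()}
# Target stationary joint: pi_Y uniform, so Q(i,x,j) = 0.25 * W(j|x,i).
Q_tgt = {(i, x, j): 0.25 * ((1 - p) if j == (x ^ i) else p)
         for i in (0, 1) for x in (0, 1) for j in (0, 1)}
dist = sum(abs(Q_emp.get(k, 0) - Q_tgt[k]) for k in Q_tgt)
print(f"l1 distance: {dist:.4f}")     # shrinks as n grows (ergodic theorem)
```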

4. Achievability and Coding Theorems

The central inner bound for empirical coordination over Markov channels is as follows: a target $\mathbb{P}_{U,X,Y',Y,V}$ is achievable if there exists an auxiliary finite variable $W$ such that the joint law factorizes as

$$\mathbb{P}_{U,X,Y',Y,V} = P_U(u)\, P_X(x)\, P_{W|U,X}(w|u,x)\, \pi_Y(y')\, \mathsf{W}(y|x,y')\, P_{V|Y,X,W}(v|y,x,w),$$

and satisfies the single-letter constraint

$$I(X;Y|Y') - I(U;W|X) \geq 0.$$

Achievability is demonstrated via a block-Markov coordination coding scheme involving

  • Codebook generation with $2^{nR}$ i.i.d. $X^n(m)$ sequences and, for each $m$, $2^{nR}$ $W^n(m, \hat m)$ sequences,
  • Covering and packing arguments for source/auxiliary variables and channel outputs, respectively,
  • Error analysis leveraging covering lemmas (for $R > I(U;W|X)$), Markov typicality, and a two-stage joint packing lemma (for $R < I(X;Y|Y')$).
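The single-letter quantity $I(X;Y|Y')$ entering the packing condition can be evaluated directly from a joint pmf over $(Y', X, Y)$. The sketch below computes the conditional mutual information for the binary XOR channel example of Section 6 with uniform inputs; the instance and helper function are illustrative, not taken from the paper.

```python
from math import log2

def cond_mutual_info(pjoint):
    # I(X;Y|Y') from a joint pmf over triples (y_prev, x, y):
    # sum_{y',x,y} p(y',x,y) log2[ p(y',x,y) p(y') / (p(y',x) p(y',y)) ]
    p_yp, p_ypx, p_ypy = {}, {}, {}
    for (yp, x, y), p in pjoint.items():
        p_yp[yp] = p_yp.get(yp, 0.0) + p
        p_ypx[(yp, x)] = p_ypx.get((yp, x), 0.0) + p
        p_ypy[(yp, y)] = p_ypy.get((yp, y), 0.0) + p
    total = 0.0
    for (yp, x, y), p in pjoint.items():
        if p > 0:
            total += p * log2(p * p_yp[yp] / (p_ypx[(yp, x)] * p_ypy[(yp, y)]))
    return total

# Illustrative instance: Y = X XOR Y' XOR Z, Z ~ Bern(0.1), uniform X and Y'.
p = 0.1
pj = {(i, x, j): 0.25 * ((1 - p) if j == (x ^ i) else p)
      for i in (0, 1) for x in (0, 1) for j in (0, 1)}
I = cond_mutual_info(pj)
print(I)   # equals 1 - h(p) for this channel, about 0.531 bits at p = 0.1
```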

The blockwise Markov property is explicitly maintained by passing the channel's state at the end of one block as the initial state for the next. This mechanism, together with input-driven Markov typicality, extends beyond the bounds given by i.i.d.-based type arguments for DMCs (Zhao et al., 16 Jan 2026).

5. Converse Bounds and Necessity

Any $(n)$-code that achieves empirical coordination with target $\mathbb{P}_{U,X,Y',Y,V}$ must satisfy the same single-letter information constraint. By a standard information-theoretic argument (Csiszár–Körner chain-rule techniques) with suitable auxiliary time-sharing variables, one shows

$$I(X;Y|Y') - I(U;W|X) \geq 0,$$

with the induced joint law

$$P_{U,X,Y',Y,W,V} = P_U\, P_X\, P_{Y'|X}\, \mathsf{W}_{Y|X,Y'}\, P_{W|U,X,Y',Y}\, P_{V|Y,X,W}.$$

This outer bound matches the achievable region described via the inner bound when the Markov structure is accurately captured. The analytic techniques involve ergodic theory for finite-state Markov chains and joint covering/packing lemmas specialized to block-Markov dependent codewords.

6. Relations to Classical Cases and Illustrative Examples

Several special cases and examples highlight the significance of input-driven Markov typicality:

  • DMC reduction ($Y' = \emptyset$): The framework recovers classical strictly-causal coordination results for discrete memoryless channels, as in [Cuff–Schieler 2011]. The Markov-typical sets reduce to strong joint-typicality in the i.i.d. (memoryless) setting.
  • Source–channel separation: For statistically independent $(U,V)$ and $(X,Y)$, $I(U;W|X) \geq I(U;V)$, and the achievable region reduces to requiring that the channel mutual information exceed the source rate constraint.
  • Binary-input Markov channel: Consider $Y_t = X_t \oplus Y_{t-1} \oplus Z_t$ with $Z_t \sim \mathrm{Bern}(p)$. Here $I(X;Y|Y')$ can be evaluated explicitly and, when $p$ is small, input-driven Markov typicality yields strictly larger achievable regions than i.i.d. block bounds based on $I(X;Y)$, illustrating a strict improvement over independence-based analyses.
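For the binary example, the gap can be made explicit under an assumed uniform i.i.d. input law (so $\pi_Y$ is uniform): conditioning on the state gives $I(X;Y|Y') = 1 - h(p)$, while marginally $Y_{t-1} \oplus Z_t$ is Bern$(1/2)$ independent of $X_t$, so $Y_t$ is uniform regardless of $X_t$ and the i.i.d.-based bound $I(X;Y)$ degenerates to zero for this input. A small numeric check:

```python
from math import log2

def h(q):
    # binary entropy in bits
    return 0.0 if q in (0.0, 1.0) else -q * log2(q) - (1 - q) * log2(1 - q)

p = 0.05   # small noise (illustrative)
# With uniform i.i.d. inputs and Y_t = X_t XOR Y_{t-1} XOR Z_t:
I_cond = 1 - h(p)   # I(X;Y|Y') = H(Y|Y') - H(Y|X,Y') = 1 - h(p)
# Y_{t-1} XOR Z_t ~ Bern(1/2), independent of X_t, so Y_t is uniform and
# independent of X_t:
I_marg = 0.0        # I(X;Y) = 0 for this input law
print(I_cond, I_marg)   # conditioning on the channel state strictly helps
```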

7. Connections to Literature and Methodological Innovations

Input-driven Markov typicality extends approaches used in empirical coordination over DMCs [Cuff–Zhao 2011, Le Treust–Oechtering 2017], integrating ergodic Markov process techniques with classical type-based covering and packing arguments [Csiszár–Körner 2011]. Unlike classical schemes relying on blockwise independence, this method directly exploits the controlled Markov property induced by strictly-causal encoding, enabling a more accurate characterization of achievable empirical distributions (Zhao et al., 16 Jan 2026).

The proof techniques rely critically on the ergodic theorem for finite-state Markov chains and two-stage joint packing lemmas (conditioning on boundary states), as detailed in the appendices of (Zhao et al., 16 Jan 2026). This framework provides a canonical methodology for joint source–channel coding design in systems with memory and strictly-causal state evolution, with demonstrated benefits over existing independence-based analyses.
