Source Key Capacity in Secure Systems

Updated 19 January 2026
  • Source key capacity is a fundamental measure quantifying the maximum rate at which secret keys can be generated from correlated or sender-excited sources under adversarial constraints.
  • It integrates channel-type randomness from excitation actions and source-type randomness from residual observations, optimized via hybrid coding schemes.
  • It is applied in secure network coding, distributed storage, and quantum protocols, guiding trade-offs between reliability and secrecy.

Source key capacity is a central quantity in information-theoretic secret key agreement, secure network coding, and secure storage. It quantifies the maximum rate at which secret keys can be distilled from correlated (and possibly sender-excited) sources under various operational and adversarial models. The fundamental objects are blocklength asymptotics, mutual information functionals, combinatorial graph properties, and coding scheme designs. Source key capacity arises in sender-excited models, general source models, network coding, and graph-based secure storage, and is governed by both single-letter and multi-letter optimizations.

1. Sender-Excited Source Key Capacity: Single-Letter Theorem

In the sender-excited secret key agreement model (Chou et al., 2011), the source is influenced by the sender via an action alphabet S. The observed variables X, Y, and Z (at Alice, Bob, and Eve, respectively) follow p(x, y, z | s). Alice selects a message M, maps it to an excitation sequence S^n = f(M), and observes X^n, while Bob and Eve see (Y^n, \Phi) and (Z^n, \Phi), where \Phi is the public discussion. The key capacity under an optional cost constraint (cost function \Lambda(s) with E[n^{-1} \sum_i \Lambda(S_i)] \leq \Gamma) is given by

C_{SK}(\Gamma) = \max_{p(w,u,s,x,y,z)} [I(U;Y|W) - I(U;Z|W)],

where W, U are auxiliary random variables satisfying the Markov chain W - U - (S,X) - (Y,Z). In the degraded eavesdropper case ((S,X) - Y - Z), the optimal choice is W = \emptyset, U = (S,X):

C_{SK}(\Gamma) = \max_{p(s):\, E[\Lambda(S)] \leq \Gamma} [I(S, X; Y) - I(S, X; Z)].

This mutual information split reflects two key contributions:

  • Channel-type randomness (R_{\mathrm{ch}}): I(S;Y) - I(S;Z), the wiretap coding rate obtained from the excitation actions.
  • Source-type randomness (R_{\mathrm{src}}): I(X;Y|S) - I(X;Z|S), the residual secret key extracted from correlated observations given S^n. Hybrid coding combines wiretap excitation and leftover key extraction, with trade-offs illustrated by examples (e.g., binary on-off channels, with an optimal \beta^* balancing both components); a numerical sketch of the decomposition follows this list.
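
By the chain rule, the split is exact: I(S,X;Y) - I(S,X;Z) = [I(S;Y) - I(S;Z)] + [I(X;Y|S) - I(X;Z|S)]. The following sketch is illustrative only (not an example from Chou et al.): it evaluates both components for a hypothetical degraded binary joint distribution of the form p(s) p(x|s) p(y|s,x) p(z|y) with a fixed excitation distribution p(s); the capacity would additionally maximize over p(s) subject to the cost constraint.

```python
# Illustrative sketch: evaluate R_ch, R_src, and I(S,X;Y) - I(S,X;Z) for a
# hypothetical degraded joint distribution p(s)p(x|s)p(y|s,x)p(z|y).
# The capacity maximizes over p(s); here p(s) is simply fixed.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

p_s = np.array([0.6, 0.4])                        # fixed excitation distribution p(s)
p_x_s = np.array([[0.9, 0.1], [0.2, 0.8]])        # p(x|s): Alice's observation given the action
p_y_sx = rng.dirichlet(np.ones(2), size=(2, 2))   # p(y|s,x): Bob's observation
p_z_y = np.array([[0.85, 0.15], [0.15, 0.85]])    # p(z|y): Eve degraded through a BSC(0.15)

p = np.zeros((2, 2, 2, 2))                        # joint p(s, x, y, z)
for s, x, y, z in product(range(2), repeat=4):
    p[s, x, y, z] = p_s[s] * p_x_s[s, x] * p_y_sx[s, x, y] * p_z_y[y, z]

def H(pmf):
    """Shannon entropy in bits."""
    q = pmf[pmf > 0]
    return float(-(q * np.log2(q)).sum())

def MI(pmf, A, B):
    """I(A;B), with A and B disjoint tuples of axis indices of the joint pmf."""
    def marg(axes):
        drop = tuple(ax for ax in range(pmf.ndim) if ax not in axes)
        return pmf.sum(axis=drop)
    return H(marg(A)) + H(marg(B)) - H(marg(tuple(sorted(A + B))))

S, X, Y, Z = 0, 1, 2, 3
R_ch = MI(p, (S,), (Y,)) - MI(p, (S,), (Z,))                     # I(S;Y) - I(S;Z)
R_src = (MI(p, (S, X), (Y,)) - MI(p, (S,), (Y,))) - \
        (MI(p, (S, X), (Z,)) - MI(p, (S,), (Z,)))                # I(X;Y|S) - I(X;Z|S), by chain rule
total = MI(p, (S, X), (Y,)) - MI(p, (S, X), (Z,))                # I(S,X;Y) - I(S,X;Z)
print(f"R_ch={R_ch:.4f}  R_src={R_src:.4f}  R_ch+R_src={R_ch + R_src:.4f}  total={total:.4f}")
```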

2. Secret Key Capacity in General Source Models

For two terminals observing n i.i.d. copies of a discrete source (X, Y), the classical single-letter SK capacity is given by

C_{SK}(X, Y) = \sup \{ R \,|\, R \text{ is achievable} \} = I(X \wedge Y),

where I(X \wedge Y) denotes the mutual information I(X;Y) between the two observations (Tyagi, 2013). The minimum public communication rate required for capacity-achieving key agreement is expressed via interactive common information:

R^* = CI_i(X;Y) - I(X \wedge Y),

where CI_i(X;Y) is the interactive common information, defined via multi-round protocols and Markov constraints. For binary symmetric sources (BSS), interaction does not reduce this rate, while for other sources nontrivial interaction can strictly lower R^*.
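
As a concrete check (a minimal sketch, not an example from the cited paper), consider a binary symmetric source in which Y equals X flipped with probability eps and X is uniform; then C_{SK} = I(X;Y) = 1 - h(eps). The snippet below computes the quantity both ways for a hypothetical eps = 0.1.

```python
# Sketch: SK capacity of a binary symmetric source (BSS) with crossover eps,
# computed as I(X;Y) from the joint pmf and checked against 1 - h(eps).
import numpy as np

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def bss_sk_capacity(eps):
    p_xy = 0.5 * np.array([[1 - eps, eps], [eps, 1 - eps]])   # joint pmf of (X, Y)
    px = p_xy.sum(axis=1)
    py = p_xy.sum(axis=0)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log2(p_xy[mask] / np.outer(px, py)[mask])).sum())

eps = 0.1
print(bss_sk_capacity(eps), 1 - h(eps))   # both ≈ 0.531 bits per source symbol
```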

3. Network Coding and Multicast Source Key Capacity

For general noiseless networks G = (V, E) with multi-source key dissemination (Langberg et al., 2022), the source key capacity C_k(G, S, D, \mathcal{B}) is the supremum of key rates achievable from a mixture of source random bits, subject to correctness and zero information leakage against every adversarial tap-set in the class \mathcal{B}. It satisfies the cut-set bound

C_k(G, S, D, \mathcal{B}) \leq \min_{U :\, S \subseteq U,\, D \subseteq V \setminus U} \left[ \sum_{e \in \delta^+(U)} c_e - \max_{\beta \in \mathcal{B}} \sum_{e \in \beta \cap \delta^+(U)} c_e \right].

Allowing the key to be any secure mixture of sources (not merely message bits) strictly increases capacity in multi-source regimes. For single sources, C_k = C_s, the secure multicast capacity.
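
The cut-set expression can be evaluated by brute force on small instances. The sketch below is purely illustrative: the network, its unit edge capacities, and the adversary class (any single tapped edge) are hypothetical choices, not an example from Langberg et al.

```python
# Illustrative sketch: evaluate the secure cut-set bound
#   min_U [ sum_{e in d+(U)} c_e - max_{beta in B} sum_{e in beta ∩ d+(U)} c_e ]
# on a toy directed network with one source, one destination, and an adversary
# that may tap any single edge.
from itertools import combinations

edges = {("s", "a"): 1.0, ("s", "b"): 1.0, ("a", "d"): 1.0, ("b", "d"): 1.0}
nodes = {"s", "a", "b", "d"}
source, dest = "s", "d"
tap_sets = [frozenset({e}) for e in edges]        # adversary class B: all singletons

def cut_bound(edges, nodes, source, dest, tap_sets):
    others = list(nodes - {source, dest})
    best = float("inf")
    for r in range(len(others) + 1):              # enumerate cuts U with source in U, dest outside
        for extra in combinations(others, r):
            U = {source, *extra}
            cut = [e for e in edges if e[0] in U and e[1] not in U]
            total = sum(edges[e] for e in cut)
            tapped = max((sum(edges[e] for e in beta if e in cut) for beta in tap_sets), default=0.0)
            best = min(best, total - tapped)
    return best

print(cut_bound(edges, nodes, source, dest, tap_sets))   # 1.0 for this toy network
```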

4. Source Key Capacity for Secure Storage over Graphs

In secure storage codes over graphs, multiple independent source symbols are stored at nodes, subject to access and security constraints (Li, 12 Jan 2026). Source key capacity is defined as the supremum C of achievable ratios R_Z = L / L_Z, where L is the source symbol size and L_Z is the shared key size:

C = \sup \{ R : R \text{ achievable} \}.

Graph-theoretic conditions characterize extremal regimes:

  • C = 1: no characteristic graph G^{[k]} of any source k has internal qualified edges.
  • C = 1/M (multi-source per edge): the same no-internal-edge condition for M sources, with no node sharing all M sources among its incident edges.
  • C = \infty (keyless): for every edge \{V_i, V_j\}, the union of common-source sets spans the assigned sources. Combinatorial alignment schemes realize codes matching these capacities; a set-theoretic check of the keyless condition is sketched after this list.
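
The keyless condition amounts to a simple covering check over edges. The sketch below is purely illustrative and its data layout (per-edge assigned-source sets and per-edge common-source sets) is an assumed simplification, not the formal model of (Li, 12 Jan 2026).

```python
# Hypothetical covering check for the keyless (C = infinity) condition: for every
# edge, the union of its common-source sets must cover the sources assigned to it.
def keyless_condition_holds(assigned, common):
    """assigned[edge]: set of sources the edge must protect;
    common[edge]: list of common-source sets whose union must cover them."""
    for edge, required in assigned.items():
        covered = set().union(*common.get(edge, [set()]))
        if not required <= covered:
            return False
    return True

# Toy instance with three nodes and two sources W1, W2.
assigned = {frozenset({"V1", "V2"}): {"W1"}, frozenset({"V2", "V3"}): {"W2"}}
common = {frozenset({"V1", "V2"}): [{"W1"}], frozenset({"V2", "V3"}): [{"W1", "W2"}]}
print(keyless_condition_holds(assigned, common))   # True -> keyless regime in this toy model
```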

5. Compound, Degraded, and Quantum Source Key Capacity

For compound sources with unknown statistics and one-way public communication (Tavangaran et al., 2016; Boche et al., 2016), capacity expressions become multi-letter for general sources and single-letter for degraded ones. For a source-state set S and its set of marginal states \widehat{S},

C_{sk}(S) = \min_{\hat s \in \widehat{S}} \left\{ \inf_{r \in I(\hat s)} I(X_{\hat s}; Y_r) - \sup_{t \in I(\hat s)} I(X_{\hat s}; Z_t) \right\},

for the degraded ordering X_{\hat s} \to Y_r \to Z_t. For quantum cqq compound sources, the forward key capacity for regular generating sets \mathcal{I} is

K_\to(\mathcal{I}) = \lim_{k \to \infty} \frac{1}{k} \sup_{T \leftarrow U \leftarrow Y} \inf_{\rho \in \mathcal{I}^{\otimes k}} [I(U;B|T,\rho) - I(U;E|T,\rho)].

Marginal state knowledge at the encoder can strictly improve achievable rates in irregular cases.
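
For finite state sets, the degraded compound expression can be evaluated directly once the joint distributions associated with each Bob-state and Eve-state are specified. The sketch below is illustrative: the binary symmetric joints and the single marginal state are hypothetical, and the required degraded ordering is simply assumed to hold.

```python
# Illustrative evaluation of the degraded compound formula for finite state sets:
#   C_sk = min over marginal states of [ min_r I(X;Y_r) - max_t I(X;Z_t) ].
import numpy as np

def mutual_info(pxy):
    """I(X;Y) in bits from a 2-D joint pmf."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

def compound_degraded_sk(states):
    """states: list of (pxy_list over Bob-states r, pxz_list over Eve-states t)."""
    vals = []
    for pxy_list, pxz_list in states:
        bob = min(mutual_info(p) for p in pxy_list)   # inf over r
        eve = max(mutual_info(p) for p in pxz_list)   # sup over t
        vals.append(bob - eve)
    return min(vals)                                  # min over marginal states

def bsc_joint(eps):
    """Joint pmf of (X, Y) with X uniform and Y = X through a BSC(eps)."""
    return 0.5 * np.array([[1 - eps, eps], [eps, 1 - eps]])

# One hypothetical marginal state: Bob sees X through BSC(0.1) or BSC(0.2),
# Eve through the (degraded) BSC(0.3).
states = [([bsc_joint(0.1), bsc_joint(0.2)], [bsc_joint(0.3)])]
print(compound_degraded_sk(states))   # ≈ (1 - h(0.2)) - (1 - h(0.3)) ≈ 0.159 bits
```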

6. Operational Interpretation and Coding Schemes

The operational meaning of source key capacity arises via coding schemes that combine wiretap codes, Slepian–Wolf binning, leftover-hash-lemma extraction, and deliberate excitation (e.g., the source action S). In sender-excited models (Chou et al., 2011), capacity-optimal coding is realized by:

  • A wiretap code for the excitation actions S^n: achieves R_{\mathrm{ch}}.
  • Common randomness extraction given S^n: via random binning/universal hashing, achieving R_{\mathrm{src}}. Reliability and secrecy exponents characterize the exponential decay of key-disagreement and leakage probabilities, and reveal inherent reliability-secrecy trade-offs; a universal-hashing sketch follows this list.
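
The extraction step is commonly implemented with a 2-universal hash, to which the leftover hash lemma applies; multiplication by a uniformly random binary matrix over GF(2) is one standard such family. The sketch below is schematic: the key length k would in practice be chosen just below the conditional min-entropy of the reconciled string given Eve's observations, which is not modeled here.

```python
# Schematic privacy-amplification step: hash an n-bit reconciled string down to
# a k-bit key with a 2-universal hash, key = M @ x mod 2 for a random binary M.
# The public seed M is shared, so Alice and Bob obtain identical keys.
import numpy as np

rng = np.random.default_rng(1)

def extract_key(x_bits, k, seed_matrix=None):
    n = len(x_bits)
    M = seed_matrix if seed_matrix is not None else rng.integers(0, 2, size=(k, n))
    return (M @ np.asarray(x_bits)) % 2, M

x = rng.integers(0, 2, size=64)              # reconciled common randomness X^n
key_alice, M = extract_key(x, k=16)          # Alice draws the seed and publishes it
key_bob, _ = extract_key(x, k=16, seed_matrix=M)
assert np.array_equal(key_alice, key_bob)    # identical keys from the shared seed
print(key_alice)
```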

7. Specializations and Illustrative Examples

The unifying source key capacity framework accommodates the classical wiretap channel (Wyner; C_{SK} = \max I(S;Y) - I(S;Z)), pure source-model SK agreement (Csiszár–Narayan; C_{SK} = \max_{p(u|x)} [I(U;Y) - I(U;Z)]), and hybrid models. Practical numerical examples (binary on-off channels, network coding with mixing, graph storage assignments) demonstrate trade-offs, optimal designs, and extremal regimes. These results contribute to the system-theoretic foundation for secure distributed storage, distributed computation, QKD, and networked key dissemination.
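
As one last numerical illustration of the Wyner special case (a sketch under assumed binary symmetric channels, not an example taken from the cited papers), a coarse grid search over the input distribution recovers the degraded-BSC secrecy capacity h(0.26) - h(0.1) ≈ 0.358 bits, attained at the uniform input.

```python
# Sketch: C_SK = max_{p(s)} [I(S;Y) - I(S;Z)] for a hypothetical degraded binary
# wiretap pair: Bob sees S through BSC(0.1), Eve through a further BSC(0.2),
# i.e. an effective BSC(0.26). Grid search over p(S=1).
import numpy as np

def mutual_info(pxy):
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

def bsc(eps):
    return np.array([[1 - eps, eps], [eps, 1 - eps]])

W_y = bsc(0.1)             # p(y|s): main channel
W_z = W_y @ bsc(0.2)       # p(z|s): degraded eavesdropper channel, crossover 0.26
best = max(
    mutual_info(np.diag([1 - a, a]) @ W_y) - mutual_info(np.diag([1 - a, a]) @ W_z)
    for a in np.linspace(0.01, 0.99, 99)
)
print(best)                # ≈ 0.358 bits, attained near p(S=1) = 1/2
```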


References:

  • "The Sender-Excited Secret Key Agreement Model: Capacity, Reliability and Secrecy Exponents" (Chou et al., 2011)
