Strong Chain Rules for Min-Entropy under Few Bits Spoiled (1702.08476v1)
Abstract: It is well established that the notion of min-entropy fails to satisfy the \emph{chain rule} of the form $H(X,Y) = H(X|Y)+H(Y)$, which holds for Shannon entropy. Such a property would help to analyze how min-entropy is split among smaller blocks. Problems of this kind arise, for example, when constructing extractors and dispersers. We show that any sequence of variables exhibits a very strong block-source structure (conditional distributions of blocks are nearly flat) when we \emph{spoil few correlated bits}. This implies that, conditioned on the spoiled bits, \emph{splitting-recombination properties} hold. In particular, we obtain many properties that min-entropy does not obey in general, for example strong chain rules, "information can't hurt" inequalities, and equivalences of the average-case and worst-case definitions of conditional entropy. Quantitatively, for any sequence $X_1,\ldots,X_t$ of random variables over an alphabet $\mathcal{X}$ we prove that, when conditioned on $m = t\cdot O( \log\log|\mathcal{X}| + \log\log(1/\epsilon) + \log t)$ bits of auxiliary information, all conditional distributions of the form $X_i|X_{<i}$ are $\epsilon$-close to being nearly flat (within a constant factor of flat). The argument is combinatorial (based on simplex coverings). This result may be used as a generic tool for \emph{exhibiting block-source structures}. We demonstrate this by reproving the fundamental converter due to Nisan and Zuckerman (\emph{J. Computer and System Sciences, 1996}), which shows that sampling blocks from a min-entropy source roughly preserves the entropy rate. By straightforward chain rules alone, our bound implies an additive loss of $o(1)$ (for sufficiently many samples), which qualitatively matches the first tight analysis of this problem, due to Vadhan (\emph{CRYPTO'03}), obtained by large deviation techniques.
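For intuition, the following standard counterexample (added here for illustration; it does not appear in the abstract) shows how far min-entropy $H_\infty(Z) = -\log\max_z \Pr[Z=z]$ can be from satisfying the chain rule, with the conditional entropy taken in the worst case over $y$. Let $X$ be uniform over $\{0,1\}^n$ and let $Y = 1$ iff $X = 0^n$, so $Y$ is a function of $X$. Then
\[
  H_\infty(X,Y) = H_\infty(X) = n,
  \qquad
  H_\infty(Y) = -\log\Pr[Y=0] = -\log(1-2^{-n}) \approx 0,
\]
\[
  H_\infty(X \mid Y) = \min_y H_\infty(X \mid Y{=}y) = H_\infty(X \mid Y{=}1) = 0,
\]
so $H_\infty(X \mid Y) + H_\infty(Y) \approx 0$ while $H_\infty(X,Y) = n$: conditioning on a single, almost-deterministic bit breaks the chain-rule identity by $n$ bits, in sharp contrast to the Shannon case.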