On principles of large deviation and selected data compression (1604.06971v1)
Abstract: The Shannon noiseless coding theorem (the data-compression principle) asserts that for an information source with an alphabet $\mathcal X=\{0,\ldots,\ell-1\}$ and an asymptotic equipartition property, one can reduce the number of stored strings $(x_0,\ldots,x_{n-1})\in{\mathcal X}^n$ to $\ell^{nh}$ with an arbitrarily small error-probability. Here $h$ is the entropy rate of the source (calculated to the base $\ell$). We consider a further reduction based on the concept of the utility of a string, measured in terms of the rate of a weight function. The novelty of the work is that the distribution of memory is analyzed from a probabilistic point of view. A convenient tool for assessing the degree of reduction is a probabilistic large deviation principle. Assuming a Markov-type setting, we discuss some relevant formulas, including the case of a general alphabet.
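The following is a minimal numerical sketch (not from the paper) of the quantities named in the abstract: for an illustrative two-state Markov source with an assumed transition matrix $P$, it estimates the entropy rate $h$ to the base $\ell=|\mathcal X|$ and compares the AEP-predicted count $\ell^{nh}$ of typical strings with the total number $\ell^{n}$.

```python
# Sketch under assumed parameters: a two-state Markov chain with transition
# matrix P (illustrative values) and string length n = 100. It computes the
# stationary distribution pi, the entropy rate h (base ell), and the
# AEP-predicted reduction from ell**n strings to roughly ell**(n*h).
import numpy as np

ell = 2                                # alphabet X = {0, 1}
P = np.array([[0.9, 0.1],              # P[i, j] = Prob(next = j | current = i)
              [0.3, 0.7]])

# Stationary distribution pi: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# Entropy rate h = -sum_i pi_i sum_j P_ij log_ell P_ij  (to the base ell).
h = -np.sum(pi[:, None] * P * np.log(P) / np.log(ell))

n = 100
print(f"entropy rate h (base {ell}): {h:.4f}")
print(f"total strings ell^n        : {float(ell**n):.3e}")
print(f"typical set size ell^(n*h) : {float(ell**(n * h)):.3e}")
```

For this choice of $P$ the entropy rate is well below 1, so $\ell^{nh}$ is exponentially smaller than $\ell^{n}$, which is the reduction the noiseless coding theorem refers to; the paper's further reduction via a weight-function rate is not reproduced here.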