Results on the Fundamental Gain of Memory-Assisted Universal Source Coding (1205.4338v1)

Published 19 May 2012 in cs.IT and math.IT

Abstract: Many applications require data processing to be performed on individual pieces of data of finite size, e.g., files in cloud storage units and packets in data networks. However, traditional universal compression solutions do not perform well on finite-length sequences. Recently, we proposed a framework called memory-assisted universal compression that holds significant promise for reducing the amount of redundant data in finite-length sequences. The proposed compression scheme is based on the observation that it is possible to learn the source statistics (by memorizing previous sequences from the source) at some intermediate entities and then leverage the memorized context to reduce the redundancy of the universal compression of finite-length sequences. We first present the fundamental gain of the proposed memory-assisted universal source coding over conventional universal compression (without memorization) for a single parametric source. Then, we extend and investigate the benefits of memory-assisted universal source coding when the data sequences are generated by a compound source, i.e., a mixture of parametric sources. We further develop a clustering technique within the memory-assisted compression framework to better utilize the memory by classifying the observed data sequences from a mixture of parametric sources. Finally, we demonstrate through computer simulations that the proposed joint memorization and clustering technique can achieve up to a 6-fold improvement over traditional universal compression when a mixture of non-binary Markov sources is considered.
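
The first result mentioned in the abstract, the gain of memorization for a single parametric source, can be illustrated with a minimal sketch. It assumes a binary memoryless (Bernoulli) source and a Krichevsky-Trofimov (KT) sequential estimator; the warm-start-by-counts trick, the parameter values, and all function names below are illustrative assumptions, not the paper's construction.

```python
# Illustrative sketch (not the paper's algorithm): compare the ideal
# codelength of a KT sequential estimator on a short binary i.i.d. sequence
# with and without a memorized sequence from the same source. The memorized
# symbol counts warm-start the estimator, mimicking the redundancy reduction
# that memory-assisted universal compression targets.

import math
import random

def kt_codelength(seq, prior_zeros=0, prior_ones=0):
    """Ideal codelength (bits) of `seq` under a sequential KT estimator,
    optionally warm-started with memorized symbol counts."""
    n0, n1 = prior_zeros, prior_ones
    bits = 0.0
    for sym in seq:
        p1 = (n1 + 0.5) / (n0 + n1 + 1.0)  # KT probability of seeing a '1'
        p = p1 if sym == 1 else 1.0 - p1
        bits += -math.log2(p)
        if sym == 1:
            n1 += 1
        else:
            n0 += 1
    return bits

random.seed(0)
theta = 0.2                       # unknown source parameter (Bernoulli)
n, m, trials = 128, 4096, 200     # sequence length, memory size, Monte Carlo runs

plain, assisted = 0.0, 0.0
for _ in range(trials):
    memory = [1 if random.random() < theta else 0 for _ in range(m)]
    seq = [1 if random.random() < theta else 0 for _ in range(n)]
    plain += kt_codelength(seq)
    assisted += kt_codelength(seq, memory.count(0), memory.count(1))

entropy = n * (-theta * math.log2(theta) - (1 - theta) * math.log2(1 - theta))
print(f"sequence entropy : {entropy:.1f} bits")
print(f"no memory        : {plain / trials:.1f} bits")
print(f"memory-assisted  : {assisted / trials:.1f} bits")
```

In this toy setup the memorized counts play the role of the learned source statistics, so the memory-assisted codelength sits closer to the sequence entropy than the plain universal codelength does; the size of that gap for finite-length sequences is the quantity the paper analyzes.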

Citations (20)
