Domain Specific Hierarchical Huffman Encoding (1307.0920v1)

Published 3 Jul 2013 in cs.IT, cs.DS, and math.IT

Abstract: In this paper, we revisit the classical data compression problem for domain-specific texts. It is well known that the classical Huffman algorithm is optimal with respect to prefix encoding, and that its compression operates at the character level. Since many data transfers are domain specific, for example, downloads of lecture notes, web blogs, etc., it is natural to consider data compression at a larger granularity (i.e., word level rather than character level). Our framework employs a two-level compression scheme in which the first level identifies frequent patterns in the text using classical frequent-pattern algorithms. The identified patterns are replaced with special strings, and to achieve a better compression ratio the length of each special string is ensured to be shorter than the length of the corresponding pattern. After this transformation, we apply the classical Huffman compression algorithm to the resultant text. In short, in the first level compression is done at the word level, and in the second level at the character level. Interestingly, this two-level compression technique for domain-specific text outperforms the classical Huffman technique. To support our claim, we present both theoretical and simulation results for domain-specific texts.
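The two-level scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the marker alphabet, the choice of the top-k words as the "frequent patterns", and all function names are assumptions made for the example. The key invariant from the abstract is preserved: each special string is strictly shorter than the pattern it replaces.

```python
import heapq
from collections import Counter
from itertools import count

def huffman_codes(text):
    """Build a classical Huffman prefix code for the characters of `text`."""
    freq = Counter(text)
    tiebreak = count()  # prevents comparing the dict payloads on frequency ties
    heap = [(f, next(tiebreak), {ch: ""}) for ch, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # merge the two least-frequent subtrees, prefixing their codes
        merged = {ch: "0" + code for ch, code in c1.items()}
        merged.update({ch: "1" + code for ch, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

def first_level(text, k=3):
    """Level 1 (word level): replace the k most frequent multi-character
    words with single-character markers absent from the text, so each
    special string is shorter than the pattern it replaces."""
    words = text.split()
    frequent = [w for w, _ in Counter(words).most_common(k) if len(w) > 1]
    # hypothetical marker alphabet: printable characters unused in the text
    markers = [c for c in "#@$%&*" if c not in text][: len(frequent)]
    table = dict(zip(frequent, markers))
    transformed = " ".join(table.get(w, w) for w in words)
    return transformed, table

def compress(text):
    """Level 2 (character level): Huffman-encode the transformed text."""
    transformed, table = first_level(text)
    codes = huffman_codes(transformed)
    bits = "".join(codes[ch] for ch in transformed)
    return bits, codes, table
```

In a real codec the substitution table (and the Huffman code) must be transmitted alongside the bitstream, so the scheme pays off when the domain-specific dictionary is shared or amortized over many texts, which is exactly the setting the paper targets.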

Citations (4)
