
Machine Learning for Error Correction with Natural Redundancy (1910.07420v1)

Published 15 Oct 2019 in cs.IT and math.IT

Abstract: The persistent storage of big data requires advanced error correction schemes. The classical approach is to use error-correcting codes (ECCs). This work studies an alternative approach that uses the redundancy inherent in the data itself for error correction. This type of redundancy, called Natural Redundancy (NR), is abundant in many types of uncompressed and even compressed files. The complex structures of Natural Redundancy, however, require machine learning techniques to exploit. In this paper, we study two fundamental approaches to using Natural Redundancy for error correction. The first approach, called Representation-Oblivious, requires no prior knowledge of how data are represented or compressed in files. It uses deep learning to detect file types accurately and then mines Natural Redundancy for soft decoding. The second approach, called Representation-Aware, assumes that such knowledge is available and uses it for error correction. Furthermore, both approaches combine decoding based on NR with ECC decoding. Both experimental results and analysis show that such an integrated scheme can substantially improve error correction performance.
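The integrated NR+ECC decoding idea can be pictured as adding NR-derived soft information to the channel's soft information before an ECC soft decoder runs. The sketch below is illustrative only and not taken from the paper: `nr_bit_scores` is a hypothetical stand-in for whatever learned model (e.g., a neural network trained on file statistics) would supply per-bit log-likelihood ratios from Natural Redundancy, and the ECC decoding stage itself is omitted.

```python
import numpy as np

# Hypothetical sketch (not from the paper): combine channel LLRs with
# Natural-Redundancy (NR) soft information before ECC soft decoding.

def channel_llrs(received, noise_var):
    """LLRs for BPSK over an AWGN channel: L = 2*y / sigma^2."""
    return 2.0 * received / noise_var

def nr_bit_scores(candidate_bits):
    """Placeholder NR model. In the paper's setting, a learned model over
    file content would return, per bit, log P(bit=0)/P(bit=1) given its
    context. Here we simply return zeros (no NR information)."""
    return np.zeros_like(candidate_bits, dtype=float)

def integrated_llrs(received, noise_var, candidate_bits, nr_weight=1.0):
    """Add NR-based soft information to channel LLRs."""
    return channel_llrs(received, noise_var) + nr_weight * nr_bit_scores(candidate_bits)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, size=16)
    tx = 1.0 - 2.0 * bits                      # BPSK mapping: 0 -> +1, 1 -> -1
    rx = tx + rng.normal(scale=0.5, size=16)   # AWGN channel
    llrs = integrated_llrs(rx, noise_var=0.25, candidate_bits=bits)
    est = (llrs < 0).astype(int)               # hard decision: negative LLR -> bit 1
    print("bit errors:", int(np.sum(est != bits)))
    # The combined LLRs would normally feed a soft-input ECC decoder
    # (e.g., LDPC belief propagation) rather than a hard decision.
```

Summing the two LLR terms corresponds to assuming the channel observation and the NR evidence are conditionally independent given the bit; the paper's actual scheme for combining NR-based decoding with ECCs may differ in detail.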

Citations (8)
