
Quantum Error Correction via Noise Guessing Decoding (2208.02744v3)

Published 4 Aug 2022 in quant-ph, cs.IT, and math.IT

Abstract: Quantum error correction codes (QECCs) play a central role in both quantum communications and quantum computation. Practical QECCs, such as stabilizer codes, are generally structured to suit a specific use and have rigid code lengths and code rates. This paper shows that it is possible to both construct and decode QECCs that attain the maximum performance of the finite blocklength regime, for any chosen code length, when the code rate is sufficiently high. GRAND (guessing random additive noise decoding), a recently proposed strategy for decoding classical codes, opened the door to efficiently decoding classical random linear codes (RLCs) that perform near the maximum rate of the finite blocklength regime. By using noise statistics, GRAND is a noise-centric, efficient, universal decoder for classical codes, provided that a simple code membership test exists. These conditions are particularly suitable for quantum systems, so the paper extends these concepts to quantum random linear codes (QRLCs), which were known to be constructible but whose decoding was not yet feasible. By combining QRLCs with the newly proposed quantum-GRAND, this work shows that it is possible to decode QECCs that are easy to adapt to changing conditions. The paper starts by assessing the minimum number of gates the coding circuit needs to reach the QRLCs' asymptotic performance, and then proposes a quantum-GRAND algorithm that uses quantum noise statistics not only to build an adaptive code membership test but also to implement syndrome decoding efficiently.
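To make the guess-and-test idea concrete, below is a minimal sketch of classical GRAND over a binary symmetric channel, the setting that the paper's quantum-GRAND generalizes. The function name, the parity-check-matrix membership test, and the weight-ordered guessing schedule are illustrative assumptions, not code from the paper.

```python
# Minimal sketch of classical GRAND on a binary symmetric channel (p < 1/2).
# Assumption: the code is binary linear with parity-check matrix H, so the
# membership test is the syndrome check H x = 0 (mod 2). Names are illustrative.
import itertools
import numpy as np

def grand_decode(y, H, max_weight=None):
    """Guess noise patterns from most to least likely (for a BSC this is
    increasing Hamming weight) and return the first candidate that passes
    the code membership test, together with the guessed noise pattern."""
    y = np.asarray(y, dtype=np.uint8)
    n = y.size
    if max_weight is None:
        max_weight = n  # exhaustive; practical GRAND abandons after a guess budget
    for w in range(max_weight + 1):                 # lightest patterns first
        for flips in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=np.uint8)
            e[list(flips)] = 1                      # candidate noise pattern
            x = y ^ e                               # candidate transmitted word
            if not ((H @ x) % 2).any():             # membership test: H x = 0
                return x, e                         # first hit = ML word on a BSC
    return None, None                               # abandoned: no codeword found

# Toy usage: the [3, 1] repetition code, whose parity checks are x0=x1 and x1=x2.
H = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=np.uint8)
word, noise = grand_decode([1, 0, 0], H)            # single bit flip on bit 0
print(word, noise)                                  # -> [0 0 0] [1 0 0]
```

Per the abstract, the quantum variant keeps the same guess-and-test loop, but the candidate patterns are ranked by measured quantum noise statistics, which drive both the adaptive code membership test and the syndrome decoding step.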

Citations (11)
