Convolutional Entanglement Distillation (0708.3699v2)

Published 28 Aug 2007 in quant-ph, cs.IT, and math.IT

Abstract: We develop a theory of entanglement distillation that exploits a convolutional coding structure. We provide a method for converting an arbitrary classical binary or quaternary convolutional code into a convolutional entanglement distillation protocol. The imported classical convolutional code does not have to be dual-containing or self-orthogonal. The yield and error-correcting properties of such a protocol depend respectively on the rate and error-correcting properties of the imported classical convolutional code. A convolutional entanglement distillation protocol has several other benefits. Two parties sharing noisy ebits can distill noiseless ebits "online" as they acquire more noisy ebits. The distillation yield is high and the decoding complexity is low. Our theory of convolutional entanglement distillation reduces the problem of finding a good convolutional entanglement distillation protocol to the well-established problem of finding a good classical convolutional code.
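
To make the classical ingredient concrete, here is a minimal sketch of the kind of code the construction imports: a standard rate-1/2 binary convolutional encoder with the textbook (7, 5) octal generators. The encoder and its parameters are generic illustrative choices, not codes drawn from the paper; the conversion of such a code into a distillation protocol is the paper's contribution and is not shown here.

```python
# Minimal sketch of a rate-1/2 binary convolutional encoder -- the kind of
# classical code the paper's construction imports. The generator polynomials
# (octal 7 and 5, constraint length 3) are a standard textbook choice and
# are NOT taken from the paper.

def conv_encode(bits, generators=(0b111, 0b101), constraint_len=3):
    """Encode a bit sequence with a feedforward convolutional code.

    Each input bit produces one output bit per generator polynomial,
    so a two-generator code has rate 1/2.
    """
    state = 0  # shift register holding the most recent constraint_len bits
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << constraint_len) - 1)
        for g in generators:
            # Output bit = parity of the register bits tapped by g.
            out.append(bin(state & g).count("1") % 2)
    return out

if __name__ == "__main__":
    message = [1, 0, 1, 1, 0]
    print(conv_encode(message))  # 10 coded bits for 5 message bits (rate 1/2)
```

Per the abstract, the protocol's yield tracks the rate of the imported code (here k/n = 1/2), so finding a good distillation protocol reduces to choosing a good classical encoder of this form.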

Citations (19)
