
Multilevel distillation of magic states for quantum computing (1210.3388v2)

Published 11 Oct 2012 in quant-ph

Abstract: We develop a procedure for distilling magic states used in universal quantum computing that requires substantially fewer initial resources than prior schemes. Our distillation circuit is based on a family of concatenated quantum codes that possess a transversal Hadamard operation, enabling each of these codes to distill the eigenstate of the Hadamard operator. A crucial result of this design is that low-fidelity magic states can be consumed to purify other high-fidelity magic states to even higher fidelity, which we call "multilevel distillation." When distilling in the asymptotic regime of infidelity $\epsilon \rightarrow 0$ for each input magic state, the number of input magic states consumed on average to yield an output state with infidelity $O(\epsilon^{2^r})$ approaches $2^r+1$, which comes close to saturating the conjectured bound in [Phys. Rev. A 86, 052329]. We show numerically that there exist multilevel protocols such that the average number of magic states consumed to distill from error rate $\epsilon_{\mathrm{in}} = 0.01$ to $\epsilon_{\mathrm{out}}$ in the range $10^{-5}$ to $10^{-40}$ is about $14\log_{10}(1/\epsilon_{\mathrm{out}}) - 40$; the efficiency of multilevel distillation dominates all other reported protocols when distilling Hadamard magic states from initial infidelity 0.01 to any final infidelity below $10^{-7}$. These methods are an important advance for magic-state distillation circuits in high-performance quantum computing, and they provide insight into the limitations of nearly resource-optimal quantum error correction.
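
As a rough illustration, the Python sketch below (not from the paper) evaluates the two cost estimates quoted in the abstract: the numerical fit of about $14\log_{10}(1/\epsilon_{\mathrm{out}}) - 40$ input magic states per output when distilling from $\epsilon_{\mathrm{in}} = 0.01$, and the asymptotic count $2^r+1$ for a protocol whose output infidelity is $O(\epsilon^{2^r})$. The function names and script structure are illustrative assumptions, and the fitted formula is only quoted for $\epsilon_{\mathrm{out}}$ between roughly $10^{-40}$ and $10^{-5}$.

```python
import math

# Sketch based on the cost estimates quoted in the abstract; function names
# and the overall script structure are illustrative, not from the paper.

def estimated_inputs_per_output(eps_out: float) -> float:
    """Numerical fit from the abstract: average number of input magic states
    (at infidelity eps_in = 0.01) consumed per output state of infidelity
    eps_out, quoted for eps_out roughly between 1e-40 and 1e-5."""
    return 14.0 * math.log10(1.0 / eps_out) - 40.0

def asymptotic_inputs(r: int) -> int:
    """Asymptotic (eps -> 0) input count for an r-level protocol, whose
    output infidelity is O(eps ** (2 ** r))."""
    return 2 ** r + 1

if __name__ == "__main__":
    for exp in (5, 10, 20, 40):
        eps_out = 10.0 ** (-exp)
        print(f"eps_out = 1e-{exp}: ~{estimated_inputs_per_output(eps_out):.0f} inputs per output")
    for r in range(1, 5):
        print(f"r = {r}: {asymptotic_inputs(r)} inputs asymptotically, "
              f"output infidelity O(eps^{2 ** r})")
```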

