Neural Decoders for Universal Quantum Algorithms (2509.11370v1)

Published 14 Sep 2025 in quant-ph

Abstract: Fault-tolerant quantum computing demands decoders that are fast, accurate, and adaptable to circuit structure and realistic noise. While machine-learning (ML) decoders have demonstrated impressive performance for quantum memory, their use in algorithmic decoding, where logical gates create complex error correlations, remains limited. We introduce a modular attention-based neural decoder that learns gate-induced correlations and generalizes from training on random circuits to unseen multi-qubit algorithmic workloads. Our decoders achieve fast inference and logical error rates comparable to most-likely-error (MLE) decoders across varied circuit depths and qubit counts. To address realistic noise, we incorporate loss-resolving readout, yielding substantial gains when qubit loss is present. We further show that by tailoring the decoder to the structure of the algorithm and decoding only the relevant observables, we can simplify the decoder design without sacrificing accuracy. We validate our framework on multiple error-correction codes, including surface codes and 2D color codes, and demonstrate state-of-the-art performance under circuit-level noise. Finally, we show that the use of attention offers interpretability by identifying the most relevant correlations being tracked by the decoder. By enabling experimental validation of deep-circuit fault-tolerant algorithms and architectures (Bluvstein et al., arXiv:2506.20661, 2025), these results establish neural decoders as practical, versatile, and high-performance tools for quantum computing.
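To make the core idea concrete, the following is a minimal sketch of an attention-based syndrome decoder in PyTorch. It is illustrative only and not the authors' architecture: the class name SyndromeAttentionDecoder, the layer dimensions, and the mean-pooled classification head are all hypothetical choices, and the paper's modular design, loss-resolving readout, and observable-tailored decoding are not reproduced here. The sketch shows only the general technique of applying self-attention across measurement rounds of syndrome data to predict whether a logical observable has flipped.

import torch
import torch.nn as nn

class SyndromeAttentionDecoder(nn.Module):
    """Toy attention decoder (hypothetical; not the paper's model).
    Maps a sequence of syndrome vectors, one token per stabilizer
    measurement round, to a predicted logical-observable flip."""

    def __init__(self, num_stabilizers: int, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # Embed each round's syndrome bits into a d_model-dim token.
        self.embed = nn.Linear(num_stabilizers, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        # Self-attention across rounds can, in principle, track
        # correlations between syndrome events in distant rounds,
        # such as those induced by logical gates.
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Classify: probability that the logical observable flipped.
        self.head = nn.Linear(d_model, 1)

    def forward(self, syndromes: torch.Tensor) -> torch.Tensor:
        # syndromes: (batch, rounds, num_stabilizers), values in {0, 1}
        x = self.embed(syndromes.float())
        x = self.encoder(x)
        # Pool over rounds, then predict a per-shot flip probability.
        return torch.sigmoid(self.head(x.mean(dim=1))).squeeze(-1)

# Usage: 8 stabilizers, 5 measurement rounds, batch of 2 shots.
model = SyndromeAttentionDecoder(num_stabilizers=8)
shots = torch.randint(0, 2, (2, 5, 8))
print(model(shots))  # two probabilities, one per shot

One appeal of an attention-based design, which the abstract highlights, is interpretability: the learned attention weights indicate which syndrome correlations the decoder is tracking.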
