
Machine Learning Message-Passing for the Scalable Decoding of QLDPC Codes (2408.07038v2)

Published 13 Aug 2024 in quant-ph

Abstract: We present Astra, a novel and scalable decoder using graph neural networks. Our decoder works similarly to solving a Sudoku puzzle of constraints represented by the Tanner graph. In general, Quantum Low Density Parity Check (QLDPC) decoding is based on Belief Propagation (BP, a variant of message-passing) and requires time-intensive post-processing methods such as Ordered Statistics Decoding (OSD). Without using any post-processing, Astra achieves higher thresholds and better logical error rates when compared to BP+OSD, both for surface codes trained up to distance 11 and Bivariate Bicycle (BB) codes trained up to distance 18. Moreover, we can successfully extrapolate the decoding functionality: we decode high distances (surface code up to distance 25 and BB code up to distance 34) by using decoders trained on lower distances. Astra+OSD is faster than BP+OSD. We show that with decreasing physical error rates, Astra+OSD makes progressively fewer calls to OSD when compared to BP+OSD, even in the context of extrapolated decoding. Astra(+OSD) achieves orders of magnitude lower logical error rates for BB codes compared to BP(+OSD). The source code is open-sourced at https://github.com/arshpreetmaan/astra.
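
The abstract frames decoding as learned message passing over the Tanner graph defined by the code's parity-check matrix: variable (qubit) nodes exchange messages with check (stabilizer) nodes until per-qubit error estimates emerge. The sketch below illustrates that general idea in PyTorch. It is a minimal, hypothetical example: the class names, layer sizes, update rules, and number of rounds are assumptions made for illustration and are not taken from the Astra implementation (see the linked repository for the actual code).

```python
# Minimal sketch (not Astra's architecture): learned message passing
# over a Tanner graph. The graph comes from a parity-check matrix H,
# where variable node v connects to check node c whenever H[c, v] = 1.
import torch
import torch.nn as nn


class TannerMessagePassing(nn.Module):
    """One round of learned message passing between checks and variables."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.check_update = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU())
        self.var_update = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU())

    def forward(self, H, var_h, check_h):
        # Aggregate variable states into each check (sum over incident edges),
        # then push the updated check states back to each variable.
        check_in = H @ var_h                                  # (num_checks, hidden)
        check_h = self.check_update(torch.cat([check_h, check_in], dim=-1))
        var_in = H.T @ check_h                                # (num_vars, hidden)
        var_h = self.var_update(torch.cat([var_h, var_in], dim=-1))
        return var_h, check_h


class GNNDecoder(nn.Module):
    """Maps a measured syndrome to per-qubit error probabilities (illustrative only)."""

    def __init__(self, hidden_dim: int = 32, num_rounds: int = 8):
        super().__init__()
        self.embed_syndrome = nn.Linear(1, hidden_dim)
        self.rounds = nn.ModuleList(
            [TannerMessagePassing(hidden_dim) for _ in range(num_rounds)])
        self.readout = nn.Linear(hidden_dim, 1)

    def forward(self, H: torch.Tensor, syndrome: torch.Tensor):
        num_checks, num_vars = H.shape
        # Check nodes start from their syndrome bit; variable nodes start at zero.
        check_h = self.embed_syndrome(syndrome.unsqueeze(-1).float())
        var_h = torch.zeros(num_vars, check_h.shape[-1])
        for layer in self.rounds:
            var_h, check_h = layer(H, var_h, check_h)
        # Probability that each qubit carries an error.
        return torch.sigmoid(self.readout(var_h)).squeeze(-1)


# Toy usage on the distance-3 repetition code's parity-check matrix.
H = torch.tensor([[1., 1., 0.],
                  [0., 1., 1.]])
syndrome = torch.tensor([1., 0.])     # first check violated
decoder = GNNDecoder()
error_probs = decoder(H, syndrome)    # untrained, so the output is arbitrary
print(error_probs)
```

In this framing, a trained readout over the variable-node states replaces the post-processing step (OSD) that BP normally relies on, which is the efficiency claim made in the abstract.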
