Autonomous quantum error correction beyond break-even and its metrological application (2509.26042v1)

Published 30 Sep 2025 in quant-ph

Abstract: The ability to extend the lifetime of a logical qubit beyond that of the best physical qubit available within the same system, i.e., the break-even point, is a prerequisite for building practical quantum computers. So far, this point has been exceeded through active quantum error correction (QEC) protocols, in which a logical error is corrected by measuring its syndrome and then performing an adaptive correcting operation. Autonomous QEC (AQEC), which does not require such resource-consuming measurement-feedback control, has been demonstrated in several experiments, but none of them has unambiguously reached the break-even point. Here, we present an unambiguous demonstration of beyond-break-even AQEC in a circuit quantum electrodynamics system, where a photonic logical qubit encoded in a superconducting microwave cavity is protected against photon loss through autonomous error correction enabled by engineered dissipation. Under AQEC protection, the logical qubit achieves a lifetime surpassing that of the best physical qubit available in the system by 18%. We further employ this AQEC protocol to enhance the precision of measuring a small frequency shift, achieving a metrological gain of 6.3 dB over that obtained with the most robust Fock-state superposition. These results illustrate that the demonstrated AQEC procedure not only represents a crucial step towards fault-tolerant quantum computation but also offers advantages for building robust quantum sensors.
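
To illustrate the kind of dissipation-engineered protection the abstract describes, the sketch below simulates a toy bosonic code under single-photon loss, with and without an idealized corrective dissipator. This is a minimal illustration only: it assumes the lowest-order binomial code words, an idealized jump operator, and arbitrary rates, and it is not the protocol, encoding, or parameters used in the paper. It requires the QuTiP library.

```python
# Toy model of autonomous QEC (AQEC) against single-photon loss via an
# engineered dissipator. Assumptions (not from the paper): lowest-order
# binomial code words, an idealized corrective jump operator, arbitrary rates.
import numpy as np
from qutip import destroy, basis, mesolve

N = 10                       # Fock-space truncation
a = destroy(N)               # cavity annihilation operator
kappa = 1.0                  # single-photon-loss rate (assumed)
gamma = 20.0                 # engineered correction rate (assumed >> kappa)

# Lowest-order binomial code words; both lose photons at the same rate and
# map to orthogonal error states |3> and |1> after a single photon loss.
zeroL = (basis(N, 0) + basis(N, 4)).unit()
oneL = basis(N, 2)

# Engineered dissipator that pumps the error subspace back into the code
# space through a single jump channel, preserving the logical superposition.
L_corr = np.sqrt(gamma) * (zeroL * basis(N, 3).dag() + oneL * basis(N, 1).dag())

# Logical |+> state and its projector, used to track logical coherence.
plusL = (zeroL + oneL).unit()
P_plus = plusL * plusL.dag()

H = 0 * a.dag() * a          # rotating frame, no coherent dynamics
tlist = np.linspace(0, 2.0, 101)

# Evolve with photon loss only, then with photon loss plus the engineered
# correction channel, and compare the remaining logical fidelity.
unprotected = mesolve(H, plusL, tlist, [np.sqrt(kappa) * a], [P_plus])
protected = mesolve(H, plusL, tlist, [np.sqrt(kappa) * a, L_corr], [P_plus])

print("Fidelity to |+_L> at t = 2/kappa:")
print("  photon loss only     :", round(unprotected.expect[0][-1], 3))
print("  with engineered AQEC :", round(protected.expect[0][-1], 3))
```

In this toy model the engineered jump operator returns both error states to the code space through a single dissipative channel, which is the essential mechanism that allows correction to proceed continuously, without measurement-feedback control.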
