Computational Quantum Capacities
- Computational quantum capacities are defined as the maximum rate of reliable quantum information transmission through noisy channels, accounting both for entanglement across channel uses and for computational resource limits on encoding and decoding.
- Recent research highlights superadditivity where multiple channel uses reveal capacities undetectable in single-letter analyses, making capacity detection computationally challenging.
- Advanced methods like semidefinite programming and iterative algorithms offer practical approaches to evaluating capacities under finite-resource and encoding/decoding complexity constraints.
Computational quantum capacities quantify the ultimate rates at which quantum information can be reliably transmitted through noisy quantum channels, subject to both physical noise and—crucially in modern formulations—computational resource constraints. Unlike their classical counterparts, quantum capacities are generically non-additive and require regularization over arbitrarily many channel uses, reflecting the deep role of entanglement across channel blocks. Recent research has demonstrated that detecting nonzero quantum capacity may necessitate consideration of an unbounded number of channel uses, potentially rendering the problem algorithmically undecidable. This article describes the mathematical foundations, superadditivity phenomena, additivity classes, finite-resource analyses, advanced computational techniques, and the implications of resource-limited encoders and decoders—all central to the emerging subfield of computational quantum capacities.
1. Mathematical Definition and Regularization of Quantum Capacity
Given a quantum channel $\mathcal{N}$ (a completely positive trace-preserving map), the quantum capacity $Q(\mathcal{N})$ is the supremum rate (in qubits per channel use) at which quantum states or entanglement can be transmitted with vanishing error in the asymptotic limit. Operationally, the key informational quantity is the coherent information

$$I_c(\rho, \mathcal{N}) = S(\mathcal{N}(\rho)) - S\big((\mathcal{N} \otimes \mathrm{id}_R)(|\psi_\rho\rangle\langle\psi_\rho|)\big),$$

where $|\psi_\rho\rangle_{AR}$ is a purification of $\rho$. Setting $Q^{(1)}(\mathcal{N}) = \max_\rho I_c(\rho, \mathcal{N})$, the celebrated Lloyd-Shor-Devetak theorem yields the regularized capacity formula

$$Q(\mathcal{N}) = \lim_{n \to \infty} \frac{1}{n}\, Q^{(1)}(\mathcal{N}^{\otimes n}),$$

reflecting the possibility that entanglement across multiple uses can strictly increase the achievable rate over product encodings (Cubitt et al., 2014).
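For concreteness, a minimal numerical sketch (illustrative, not drawn from the cited works) evaluates the coherent information of the qubit amplitude damping channel at a fixed input. It uses the standard equivalence that, since the joint output of the Stinespring dilation is pure, $S\big((\mathcal{N}\otimes\mathrm{id}_R)(\psi_\rho)\big)$ equals the entropy of the complementary channel output, so $I_c = S(\mathcal{N}(\rho)) - S(\mathcal{N}^c(\rho))$. The Kraus operators are standard; the choice of input state and damping parameter is arbitrary.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits, with 0*log(0) = 0."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def apply_channel(kraus, rho):
    """Apply a channel given as a list of Kraus operators."""
    return sum(K @ rho @ K.conj().T for K in kraus)

def coherent_information(kraus_main, kraus_comp, rho):
    """I_c(rho, N) = S(N(rho)) - S(N^c(rho))."""
    return (entropy(apply_channel(kraus_main, rho))
            - entropy(apply_channel(kraus_comp, rho)))

# Qubit amplitude damping channel with damping probability gamma.
gamma = 0.2
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
# Complementary channel: the environment records whether a decay occurred.
E0 = np.array([[1, 0], [0, np.sqrt(gamma)]])
E1 = np.array([[0, np.sqrt(1 - gamma)], [0, 0]])

rho = np.diag([0.5, 0.5])  # maximally mixed input
print(coherent_information([K0, K1], [E0, E1], rho))  # ~0.502 for gamma=0.2
```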
2. Unbounded Superadditivity and Computational Barriers
Contrary to the additive nature of classical mutual information, quantum coherent information can be strictly superadditive: $Q^{(1)}(\mathcal{N}^{\otimes n}) > n\, Q^{(1)}(\mathcal{N})$ for certain $n$ and channels $\mathcal{N}$. Early examples displayed superadditivity for up to 33 copies; more striking is superactivation, wherein two zero-capacity channels $\mathcal{N}_1$ and $\mathcal{N}_2$ satisfy $Q(\mathcal{N}_1) = Q(\mathcal{N}_2) = 0$ but $Q(\mathcal{N}_1 \otimes \mathcal{N}_2) > 0$. The main result in "Unbounded number of channel uses may be required to detect quantum capacity" proves that, for every $n$, there exists a channel $\mathcal{N}_n$ with $Q^{(1)}(\mathcal{N}_n^{\otimes n}) = 0$ but $Q(\mathcal{N}_n) > 0$ (Cubitt et al., 2014). This establishes, for the first time, that no finite blocklength truncation suffices to decide positivity of quantum capacity in general, implying a stark computational intractability and linking capacity detection to uncomputable problems in quantum information.
3. Constructive Channel Models Underlying Superadditivity
Key explicit constructions exploit "switched" channels, combining erasure channels (which randomly erase the input) with PPT-binding (private bit hiding) channels whose Choi states are approximate "pbits." In these models, a classical switch determines, per channel use, whether the input is routed through an erasure or a pbit-hiding branch. By careful design of parameters (switching probabilities, shield sizes, etc.), it is shown that, for any finite $n$, entangled coding over $n$ uses cannot generate positive coherent information, but entanglement over some larger number $N > n$ of uses can, thus achieving nonzero asymptotic capacity while all finite truncations up to $n$ fail to reveal it (Cubitt et al., 2014). The proof leverages conditional entropy bounds (Alicki-Fannes) and classical coding over the channel branches.
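The following sketch illustrates the generic switched-channel template only; it is not the construction from the paper. A classical flag, set with probability $p$, routes the input through an erasure branch or a second branch, and is copied into the output so the receiver learns which branch acted. The pbit-hiding branch is replaced here by a trivial pass-through placeholder, and all dimensions are illustrative.

```python
import numpy as np

def erasure(rho, q, d_out):
    """Erasure channel: keep the input with prob 1-q, else emit the
    erasure flag |d_out-1>. The input lives in the first dims of C^{d_out}."""
    d = rho.shape[0]
    out = np.zeros((d_out, d_out), dtype=complex)
    out[:d, :d] = (1 - q) * rho
    out[d_out - 1, d_out - 1] += q * np.trace(rho).real
    return out

def embed(rho, d_out):
    """Placeholder for the pbit-hiding branch: pass the input through unchanged."""
    out = np.zeros((d_out, d_out), dtype=complex)
    out[:rho.shape[0], :rho.shape[0]] = rho
    return out

def switched(rho, p, branch_a, branch_b):
    """Classical switch: with prob p apply branch_a, else branch_b;
    a classical flag qubit records which branch acted."""
    f0, f1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    return p * np.kron(f0, branch_a(rho)) + (1 - p) * np.kron(f1, branch_b(rho))

rho = np.array([[0.5, 0.5], [0.5, 0.5]])  # |+><+|
out = switched(rho, p=0.3,
               branch_a=lambda r: erasure(r, q=0.5, d_out=3),
               branch_b=lambda r: embed(r, 3))
print(np.trace(out).real)  # 1.0: the switched map is trace preserving
```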
4. Additivity Classes and Tractable Quantum Capacity Regions
Although the complexity of capacity detection is generically unbounded, significant classes of channels exhibit coherent-information additivity, i.e., $Q^{(1)}(\mathcal{N}^{\otimes n}) = n\, Q^{(1)}(\mathcal{N})$ for all $n$, permitting exact, single-letter computation. Degradable channels exemplify this property, as do new non-degradable families (flagged mixtures and direct sums with anti-degradable partners) identified in Smith & Wu's work (Smith et al., 2024). Here, suitable "dominating" degradable components or weakened degradability-like conditions are sufficient to guarantee strong or weak additivity. Platypus-type channels also display additivity outside the conventional degradable/PPT classes. The boundaries of these classes mark the frontier between tractable and superadditive regimes.
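Since degradable channels have additive coherent information, their quantum capacity reduces to a single-letter optimization. A minimal sketch, assuming the amplitude damping channel with $\gamma < 1/2$ (degradable, and with optimal input known to be diagonal by phase covariance), evaluates $Q = \max_\rho I_c(\rho, \mathcal{N})$ by a one-parameter scan:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def h2(x):
    """Binary entropy in bits."""
    if x <= 0 or x >= 1:
        return 0.0
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def coherent_info_ad(q, gamma):
    """Coherent information of amplitude damping at diagonal input
    diag(1-q, q): I_c = H2(q(1-gamma)) - H2(q*gamma)."""
    return h2(q * (1 - gamma)) - h2(q * gamma)

def quantum_capacity_ad(gamma):
    """Single-letter (= regularized, by degradability for gamma < 1/2)
    quantum capacity of the amplitude damping channel."""
    res = minimize_scalar(lambda q: -coherent_info_ad(q, gamma),
                          bounds=(0.0, 1.0), method="bounded")
    return -res.fun

print(quantum_capacity_ad(0.2))  # ~0.506 qubits per use
```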
5. Resource Constraints: Computational Quantum Capacities
Traditional capacity definitions ignore encoding/decoding complexity. By imposing polynomial-time constraints on the encoder and decoder (gate complexity), the computational quantum capacity quantifies the rates achievable with efficient protocols. Recent work introduces the computational two-way quantum capacity: the maximal rate of entanglement transmission achievable by encoding/decoding circuits of polynomial size (Meyer et al., 21 Jan 2026). Under standard cryptographic assumptions (quantum-secure one-way functions), there exist channels whose standard two-way capacity is near-maximal yet whose computational capacity collapses to zero. There is a sharp transition: channels with polynomial-size descriptions permit efficient entanglement distillation, while superpolynomial description complexity erases computational capacity.
6. Numerical and Algorithmic Approaches
Given the uncomputability in the general setting, computational quantum capacity research leverages several tools:
- Semidefinite programs (SDPs): Efficient for bounding coherent information, especially for single-letter and orthogonal-ensemble capacity measures (Wang, 2022).
- Flagged extensions: By embedding the channel in a higher-dimensional flagged space, upper bounds can be computed exactly in single-letter form for degradable extensions (Nourozi, 3 Jun 2025).
- Gradient and perturbative methods: Local perturbations certify non-optimal states and establish superadditivity thresholds, facilitating high-precision capacity estimation in practice (Wu et al., 22 Jul 2025).
- Concave-convex and iterative algorithms: Blahut-Arimoto-type iterative schemes allow convergence to capacity values under restricted classes (e.g., cq-channels, less noisy channels) (Ramakrishnan et al., 2019, Sutter et al., 2014); a minimal sketch of such an iteration appears after this list.
- Finite-resource quantum coding theory: Second-order expansions and dispersion estimates enable precise performance evaluation for blocklength-limited codes (Tomamichel et al., 2015).
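As an illustration of the Blahut-Arimoto approach, here is a minimal sketch, assuming a classical-quantum channel given as a finite list of output states and the multiplicative update $p(x) \propto p(x)\exp\big(D(\rho_x \| \bar\rho_p)\big)$ of the Nagaoka-type iteration analyzed by Ramakrishnan et al.; the two-state ensemble at the end is an arbitrary test case.

```python
import numpy as np

def safe_log(rho):
    """Matrix log (base e) of a PSD matrix, taken on its support."""
    vals, vecs = np.linalg.eigh(rho)
    logvals = np.where(vals > 1e-12, np.log(np.maximum(vals, 1e-12)), 0.0)
    return (vecs * logvals) @ vecs.conj().T

def rel_entropy(rho, sigma):
    """Quantum relative entropy D(rho||sigma) in nats (supp rho in supp sigma)."""
    return float(np.real(np.trace(rho @ (safe_log(rho) - safe_log(sigma)))))

def blahut_arimoto_cq(states, iters=200):
    """Blahut-Arimoto iteration for the Holevo capacity of the cq channel
    x -> states[x]; returns (capacity in bits, optimizing prior)."""
    n = len(states)
    p = np.full(n, 1.0 / n)
    for _ in range(iters):
        avg = sum(px * rho for px, rho in zip(p, states))
        # Multiplicative update: p(x) <- p(x) * exp(D(rho_x || rho_avg))
        w = p * np.exp([rel_entropy(rho, avg) for rho in states])
        p = w / w.sum()
    avg = sum(px * rho for px, rho in zip(p, states))
    cap = sum(px * rel_entropy(rho, avg) for px, rho in zip(p, states))
    return cap / np.log(2), p

# Two pure qubit signal states |0> and |+>.
s0 = np.array([[1, 0], [0, 0]], dtype=complex)
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
cap, prior = blahut_arimoto_cq([s0, plus])
print(cap, prior)  # ~0.600 bits, uniform prior (by the states' symmetry)
```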
7. Implications and Open Problems
The existence of channels whose capacity is visible only in the limit of infinitely many uses, together with the algorithmic hardness of general capacity detection, recasts quantum Shannon theory as a computationally fragmented landscape. Fundamental questions include:
- Characterization of additive/nonadditive regions: What structural properties guarantee tractable capacity computation?
- Algorithmic upper and lower bounds: Can tighter bounds be devised for highly nonadditive, non-degradable channels?
- Efficient quantum coding under complexity constraints: How does encoding/decoding gate complexity reshape networked quantum communications?
- Explicit constructions of hard instances: What are the minimal examples of computationally undecidable capacity detection?
These lines of inquiry define the future of computational quantum capacity research, anchoring practical code design, quantum cryptography, and the theoretical understanding of quantum information transmission limits (Cubitt et al., 2014, Smith et al., 2024, Nourozi, 3 Jun 2025, Tomamichel et al., 2015, Meyer et al., 21 Jan 2026).