Fusion-Based Quantum Computing (FBQC)
- Fusion-based quantum computing is a paradigm that performs universal computation by fusing small, fixed-size entangled resource states through destructive multi-qubit measurements.
- It leverages the stabilizer formalism and modular photonic hardware to dynamically grow large-scale entanglement with low operational depth and robust error correction.
- Architectural designs such as the 4-Star and 6-Ring networks demonstrate FBQC’s ability to achieve high error thresholds and scalable, latency-tolerant quantum processing.
Fusion-based quantum computing (FBQC) is a computational paradigm in which universal quantum computation is carried out by performing entangling multi-qubit measurements—called fusions—across the qubits of small, constant-sized entangled resource states. Instead of preparing a large cluster state in advance (as in traditional measurement-based quantum computing), FBQC incrementally “grows” large-scale entanglement by joining together these resource states via fusion measurements, constructing a fusion network that encodes logical qubits, error correction, and computation. The framework is notable for its compatibility with photonic hardware, low operational depth per qubit, and modular, scalable architectures. By leveraging the stabilizer formalism, FBQC enables rigorous analysis of fault tolerance, error thresholds, and the propagation of physical errors through entangling measurements, particularly in implementations affected by loss and probabilistic fusion operations such as those in linear optical quantum computing (Bartolucci et al., 2021).
1. Model Structure and Core Operations
FBQC relies on two primitive operations:
- Generation of small, fixed-size entangled resource states (e.g., four- or six-qubit GHZ, ring, or graph states)
- Entangling (fusion) measurements on pairs of qubits, typically from different resource states
In contrast with conventional measurement-based quantum computing (MBQC), FBQC does not require global cluster state preparation; instead, large-scale cluster connectivity is grown dynamically by performing fusion measurements—joint measurements of Pauli operators, commonly on two qubits from independent resource states.
Fusion measurements are generally destructive, with the measured qubits eliminated from the register. The state of the remaining qubits encodes both the effect of the measurement and any required logical byproduct operators (Pauli frame). In typical photonic implementations, dual-rail or time-bin encoding is used, and Bell-type fusions are performed using linear optics components and photon detectors.
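To make the fusion primitive concrete, the following minimal sketch (using the open-source stim stabilizer simulator as an assumed dependency) fuses two Bell-pair resource states on one qubit each. The measured qubits are destroyed, the two survivors become entangled, and the random fusion outcomes determine the Pauli-frame correction. The qubit labels and the CNOT/H measurement gadget are illustrative choices, not a prescription from the cited papers.

```python
# Minimal illustration with the stim stabilizer simulator (assumed dependency: pip install stim).
import stim

sim = stim.TableauSimulator()

# Two small resource states: Bell pairs on qubits (0, 1) and (2, 3).
sim.h(0); sim.cnot(0, 1)
sim.h(2); sim.cnot(2, 3)

# Destructive fusion of qubit 1 with qubit 2: a Bell measurement that reads out
# the eigenvalues of X1X2 and Z1Z2 (gadget: CNOT, H, then two Z-basis measurements).
sim.cnot(1, 2)
sim.h(1)
m_xx = sim.measure(1)   # True <-> X1X2 outcome is -1
m_zz = sim.measure(2)   # True <-> Z1Z2 outcome is -1

# The unmeasured qubits (0, 3) are now Bell-entangled up to Pauli byproducts
# determined by the random fusion outcomes -- the "Pauli frame".
if m_xx:
    sim.z(3)            # restore the sign of the surviving stabilizer X0X3
if m_zz:
    sim.x(3)            # restore the sign of the surviving stabilizer Z0Z3

# Verify: disentangling the canonical Bell pair on (0, 3) must yield |00>.
sim.cnot(0, 3)
sim.h(0)
assert not sim.measure(0) and not sim.measure(3)
```

Running this repeatedly samples all four fusion-outcome pairs, yet the final assertion always holds: fusion grows entanglement deterministically up to a classically tracked Pauli frame.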
2. Stabilizer Formalism and Fault Tolerance
The stabilizer framework underpins both the construction of the fusion network and embedded error correction:
- Each resource state is a stabilizer state (often a graph state), with generators of the standard graph-state form $K_v = X_v \prod_{w \in N(v)} Z_w$, where $N(v)$ is the neighborhood of vertex $v$.
- The resource group is formed by the stabilizers of all resource states; the fusion group is generated by ideal fusion measurement operators.
- The surviving stabilizer group after fusions is the centralizer of the fusion group within the resource group, $S = \{ r \in R : rf = fr \ \forall f \in F \}$, restricted to the unmeasured qubits.
- The check operator (syndrome) group consists of the products of fusion operators that are also resource-state stabilizers, $C = F \cap R$; because such products have deterministic outcomes in the absence of errors, this redundancy ensures that syndrome information can be extracted from fusion outcomes.
Logical operators and error correction are implemented via these check operators, and the results of fusion measurements are decoded post-hoc (not requiring fast physical feedforward), rendering classical processing demands minimal at the physical level.
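The check-group construction can be seen in a toy network in which every qubit is fused, so that all entanglement is consumed and only classical outcomes remain. In the sketch below (again using stim as an assumed dependency, with a hypothetical helper `fuse`), two Bell-pair resource states are joined by two fusions: each individual outcome is uniformly random, but the products $(X_0X_2)(X_1X_3)$ and $(Z_0Z_2)(Z_1Z_3)$ equal resource-state stabilizers and are therefore deterministic checks.

```python
import stim

def fuse(sim: stim.TableauSimulator, a: int, b: int):
    """Destructive two-qubit fusion: returns the (XX, ZZ) outcomes as booleans (True = -1)."""
    sim.cnot(a, b)
    sim.h(a)
    return sim.measure(a), sim.measure(b)

sim = stim.TableauSimulator()

# Resource group R: generated by X0X1, Z0Z1 (Bell pair on 0,1) and X2X3, Z2Z3 (Bell pair on 2,3).
sim.h(0); sim.cnot(0, 1)
sim.h(2); sim.cnot(2, 3)

# Fusion group F: generated by the measured operators X0X2, Z0Z2, X1X3, Z1Z3.
xx_a, zz_a = fuse(sim, 0, 2)
xx_b, zz_b = fuse(sim, 1, 3)

# Check operators lie in both F and R:
#   (X0X2)(X1X3) = (X0X1)(X2X3)   and   (Z0Z2)(Z1Z3) = (Z0Z1)(Z2Z3).
# Their outcome parities are therefore fixed (+1) in the absence of errors,
# even though each individual fusion outcome is uniformly random.
assert not (xx_a ^ xx_b)
assert not (zz_a ^ zz_b)
```

A flip of any single fusion outcome violates one of these parities, which is exactly the syndrome information the decoder consumes.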
3. Architectural Advantages and Modular Design
FBQC architectures are built from:
- Resource-state generators (RSGs): devices repeatedly emitting small, identical resource states on a fixed clock cycle
- Fusion routers: static (or low-complexity) networks for directing qubits from RSGs to fusion devices
- Fusion devices: modules performing entangling measurements
Key simplifications:
- Every physical qubit is generated, participates in a resource state, and is immediately measured; operational depth is extremely low and error accumulation is minimized.
- Resource-state generators and fusion routers can be deployed modularly, yielding scalable hardware networks composed of standardized units.
This modularity underpins advanced architectural concepts such as “interleaving” (temporally multiplexing resource states via fiber or waveguide delays) to massively increase latency-tolerant storage and logical qubit capacity per physical module (Bombin et al., 2021).
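The impact of interleaving can be estimated with a back-of-the-envelope calculation. The sketch below is illustrative only; the clock rate and fiber length are assumed round numbers, and the physical constants are standard values (telecom fiber attenuates roughly 0.2 dB/km at 1550 nm, and light travels at about 2×10^8 m/s in silica fiber), not parameters taken from the cited papers.

```python
# Toy estimate of how a fiber delay multiplies the number of resource states
# "in flight" per resource-state generator (interleaving), and what it costs in loss.
# All parameter values below are assumptions for illustration.

rsg_clock_hz = 1.0e9          # assumed resource-state generation rate (1 GHz)
fiber_length_m = 200.0        # assumed delay-line length
speed_in_fiber_m_s = 2.0e8    # ~c/1.5 for silica fiber
attenuation_db_per_km = 0.2   # typical telecom fiber at 1550 nm

delay_s = fiber_length_m / speed_in_fiber_m_s
states_in_flight = rsg_clock_hz * delay_s          # resource states stored in the delay
loss_db = attenuation_db_per_km * fiber_length_m / 1000.0
survival_probability = 10.0 ** (-loss_db / 10.0)   # per-photon survival after the delay

print(f"delay: {delay_s * 1e9:.0f} ns, states in flight: {states_in_flight:.0f}")
print(f"fiber loss: {loss_db:.3f} dB -> photon survival: {survival_probability:.4f}")
```

Even a modest delay line lets one module hold on the order of a thousand resource states at a time, at the cost of a small additional per-photon loss that must fit within the fusion network's erasure budget.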
4. Explicit Fusion Network Constructions and Thresholds
The approach is concretized with explicit fault-tolerant networks:
- 4-Star Fusion Network: uses four-qubit GHZ (star) resource states assigned to cubic cells, with each fusion measuring two-qubit operators such as $X \otimes X$ and $Z \otimes Z$; its erasure and Pauli error thresholds are reported in (Bartolucci et al., 2021).
- 6-Ring Fusion Network: employs six-qubit ring graph states arranged on a body-centered cubic lattice; each check operator (cell) depends on 12 fusion measurements, and the network achieves an erasure threshold of roughly 12% per fusion and a Pauli error threshold of roughly 1% per fusion (Bartolucci et al., 2021); a toy per-check failure calculation is sketched below.
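As a toy illustration of how per-fusion error rates feed into check operators, the snippet below computes the probability that a single check depending on 12 fusion outcomes is erased (at least one contributing outcome lost) or flipped (an odd number of contributing outcomes flipped), under the hardware-agnostic error model of Section 5. This is a sanity-check calculation, not the full threshold analysis of Bartolucci et al. (2021), which requires decoding an entire three-dimensional fusion network.

```python
# Per-check failure probabilities for a check built from k fusion outcomes,
# assuming independent erasure (p_erase) and outcome flips (p_err) per outcome.

def check_erasure_prob(p_erase: float, k: int = 12) -> float:
    """Probability that at least one of the k contributing outcomes is erased."""
    return 1.0 - (1.0 - p_erase) ** k

def check_flip_prob(p_err: float, k: int = 12) -> float:
    """Probability that an odd number of the k outcomes are flipped (parity formula)."""
    return 0.5 * (1.0 - (1.0 - 2.0 * p_err) ** k)

# Example values (illustrative only, not thresholds from the literature):
print(check_erasure_prob(0.01))   # ~0.114
print(check_flip_prob(0.01))      # ~0.108
```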
Encoding each physical qubit in a (2,2)-Shor CSS code and executing the fusion operation transversally improves tolerance to both loss and Pauli errors; analyzed in a "lifted" fusion error model, this encoded fusion substantially raises the tolerable per-fusion failure probability (Bartolucci et al., 2021).
5. Error Models: Hardware-Agnostic and Linear-Optical
FBQC is compatible with diverse physical error sources through explicit error models:
- Hardware-agnostic fusion error model: each fusion measurement yields two classical outputs, each independently erased with probability $p_{\text{erase}}$; non-erased outcomes are flipped with probability $p_{\text{err}}$.
- Linear-optical error model: dual-rail qubits with non-deterministic (probabilistic) fusion via unboosted or boosted Bell measurements. Boosting with ancillae (Bell pairs or multi-photon states) reduces the intrinsic failure probability (from 50% to 25% with a single ancilla Bell pair), but increases susceptibility to photon loss, since each fusion operation then involves more photons.
The loss-induced erasure probability per fusion outcome is
$$p_{\text{erase}} = 1 - \eta^{\,n},$$
where $\eta$ is the per-photon survival probability and $n$ is the number of photons potentially lost per fusion (e.g., $n = 2$ for an unboosted fusion of two dual-rail qubits, $n = 4$ when boosted with an ancilla Bell pair). For a (2,2)-Shor-encoded fusion, each encoded qubit comprises four photons, so more photons participate per encoded fusion; the encoding compensates by allowing the encoded outcomes to be recovered even when some constituent fusions are erased or fail.
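The loss-versus-failure trade-off of boosting can be made explicit with a small calculation. The sketch below uses the simple independent-loss model just stated, with $n = 2$ photons for an unboosted dual-rail fusion and $n = 4$ when a single ancilla Bell pair is added; counting an outcome as unusable whenever the fusion fails or any participating photon is lost is an illustrative simplification, not the exact accounting of the cited work.

```python
# Illustrative comparison of unboosted vs. boosted fusion under photon loss.
# eta: per-photon survival probability. An outcome is counted as unusable here
# if the fusion fails intrinsically OR any participating photon is lost
# (a simplification; detailed treatments differ in how failures are handled).

def erasure_from_loss(eta: float, n_photons: int) -> float:
    return 1.0 - eta ** n_photons

def total_unusable(eta: float, n_photons: int, p_fail: float) -> float:
    return 1.0 - (1.0 - p_fail) * (eta ** n_photons)

eta = 0.99
unboosted = total_unusable(eta, n_photons=2, p_fail=0.50)  # bare dual-rail fusion
boosted   = total_unusable(eta, n_photons=4, p_fail=0.25)  # one ancilla Bell pair
print(f"unboosted: {unboosted:.3f}, boosted: {boosted:.3f}")
```

At low loss the boosted fusion is clearly preferable, but the extra photons erode its advantage as loss grows; in this toy model the two options cross near $\eta \approx 0.82$.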
These models enable evaluation of loss and error thresholds, guiding both network and physical device design.
6. Physical Implementation and Tailoring
FBQC’s requirements closely match the capabilities of photonic hardware:
- Small resource states can be built using heralded single-photon sources combined with linear optics, or deterministic quantum emitters such as quantum dots operating via the Lindner–Rudolph protocol.
- Linear-optical fusion gates are built from beam splitters, phase shifters, and photon-number-resolving detectors.
- Fiber or waveguide delays serve as “quantum memories,” supporting interleaving and scalable architectures.
Physical instantiations benefit significantly from the fact that every photon is detected promptly after emission (minimal storage), simplifying demands on memory coherence and synchronization. The non-determinism intrinsic to photonic fusions is directly absorbed into the logical layer via robust error correction and decoding. Notably, simulations demonstrate that boosted and Shor-encoded fusion networks can tolerate substantially more photon loss and fusion failure per fusion than earlier linear-optical proposals (Bartolucci et al., 2021).
7. Integration of Fault Tolerance and Error Correction
FBQC integrates quantum error correction directly with the fusion network:
- Each fusion network is designed so that the measured outcomes supply a full set of check operators (syndrome information) for a topological code (typically surface code or its variants).
- Pauli frame tracking and classical decoding are performed at the logical timescale (on check outcomes), with no fast feedforward required at the hardware (physical) level; a sketch of this bookkeeping follows this list.
- Architectures may flexibly adapt to different physical error landscapes; for instance, by tailoring the error-correcting code or check structure to known error correlations in photon loss and fusion outcome distributions.
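As a sketch of what "no fast feedforward" means in practice, the snippet below tracks a Pauli frame classically: byproduct X and Z corrections implied by fusion outcomes are accumulated as bits per qubit and only folded into the interpretation of later measurement results in software. The class and method names here are hypothetical, intended only to show that the tracking reduces to cheap XOR bookkeeping.

```python
# Hypothetical sketch of classical Pauli-frame tracking (XOR bookkeeping only).
from dataclasses import dataclass, field

@dataclass
class PauliFrame:
    x_frame: dict = field(default_factory=dict)  # qubit id -> pending X correction (bool)
    z_frame: dict = field(default_factory=dict)  # qubit id -> pending Z correction (bool)

    def record_fusion(self, qubit: int, xx_outcome: bool, zz_outcome: bool) -> None:
        """Fold a fusion's byproduct operators into the frame instead of applying them physically."""
        self.z_frame[qubit] = self.z_frame.get(qubit, False) ^ xx_outcome
        self.x_frame[qubit] = self.x_frame.get(qubit, False) ^ zz_outcome

    def interpret_z_measurement(self, qubit: int, raw_outcome: bool) -> bool:
        """A pending X correction flips later Z-basis results; apply it in software."""
        return raw_outcome ^ self.x_frame.get(qubit, False)

# Usage: fusion outcomes stream in, the frame is updated, and decoding happens offline.
frame = PauliFrame()
frame.record_fusion(qubit=7, xx_outcome=True, zz_outcome=False)
print(frame.interpret_z_measurement(qubit=7, raw_outcome=False))
```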
Such tight integration of resource state design, measurement scheduling, and classical decoding not only simplifies hardware but allows physical error processes (e.g., loss, outcome flips, fusion failures) to be completely subsumed by the fault-tolerance protocol, yielding error thresholds that match or surpass the best values available for cluster-state or circuit-based approaches.
In sum, fusion-based quantum computing realizes universal, fault-tolerant quantum computation by decomposing all operations into fixed-size resource state generation and entangling fusion measurements. The stabilizer formalism rigorously tracks logical information and error propagation, enabling threshold analysis and robust error correction. Architectural simplicity, low operational depth, modularity, and precise alignment with photonic hardware capabilities distinguish FBQC as a leading architecture for scalable quantum information processing (Bartolucci et al., 2021).