
Holographic Random Circuit Sampling

Updated 14 November 2025
  • Holographic random circuit sampling is a protocol that uses mid-circuit measurements, register re-use, and circuit depth to exponentially expand the effective Hilbert space.
  • It employs a two-register system and 2-design circuits to achieve rapid anticoncentration, providing both theoretical rigor and experimental validation.
  • This approach enables scalable quantum advantage by allowing a fixed qubit device to sample from an exponentially large distribution.

The holographic random circuit sampling algorithm is a protocol that leverages repeated mid-circuit measurements, register re-use, and circuit depth to exponentially scale the effective dimension of a quantum sampling task far beyond the native physical qubit count. Recent work establishes its theoretical foundations, rigorous anticoncentration properties, and experimental viability for demonstrating quantum advantage on pre-fault-tolerant devices (Zhang et al., 7 Nov 2025).

1. Algorithmic Structure and Protocol

The algorithm partitions the quantum processor into two registers:

  • System register $A$ of $N_A$ physical qubits
  • Bath register $B$ of $N_B$ physical qubits

At each of $t$ sequential steps:

  1. A random circuit $U_k$, typically an 8-layer hardware-efficient ansatz (approximate 2-design), acts jointly on $A \cup B$.
  2. All qubits in $B$ are measured in the computational basis, yielding outcome $z_k$; optionally, $B$ is then reset to $|0\rangle^{\otimes N_B}$.
  3. After the final ($t$-th) step, $A$ is measured, producing outcome $z_A$.

The joint output is the “spatio-temporal” bitstring $z = (z_1, \ldots, z_t, z_A)$, living on $N_A + t N_B$ bits. While the physical device comprises only $N_A + N_B$ qubits, repeated use and measurement of $B$ at each step causes the effective Hilbert space dimension to scale as

$$D_{\mathrm{eff}} = 2^{N_A + t N_B}.$$

This constitutes the “holographic expansion,” wherein circuit depth $t$ functions like additional logical qubits.
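The step structure above can be sketched in a small statevector simulation. The following is a minimal sketch (not the paper's implementation), using Haar-random step unitaries as an idealized stand-in for the 2-design circuits and reshaping the state as a $2^{N_A} \times 2^{N_B}$ matrix so the bath can be measured and reset columnwise:

```python
import numpy as np

def haar_unitary(dim, rng):
    # Haar-random unitary via QR of a complex Ginibre matrix (Mezzadri's recipe)
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def hrcs_sample(NA, NB, t, rng):
    """Draw one spatio-temporal bitstring (z_1, ..., z_t, z_A) from the protocol."""
    dA, dB = 2 ** NA, 2 ** NB
    psi = np.zeros((dA, dB), dtype=complex)
    psi[0, 0] = 1.0                          # |0...0> on A (rows) and B (columns)
    bits = []
    for _ in range(t):
        U = haar_unitary(dA * dB, rng)       # random step circuit U_k on A u B
        psi = (U @ psi.reshape(-1)).reshape(dA, dB)
        pb = (np.abs(psi) ** 2).sum(axis=0)  # outcome distribution of bath B
        b = rng.choice(dB, p=pb / pb.sum())  # mid-circuit measurement -> z_k
        bits += [int(x) for x in np.binary_repr(b, NB)]
        phi = psi[:, b] / np.sqrt(pb[b])     # collapsed state of A
        psi = np.zeros((dA, dB), dtype=complex)
        psi[:, 0] = phi                      # reset B to |0...0>
    pa = np.abs(psi[:, 0]) ** 2              # final measurement of A -> z_A
    a = rng.choice(dA, p=pa / pa.sum())
    bits += [int(x) for x in np.binary_repr(a, NA)]
    return bits

bits = hrcs_sample(2, 2, 3, np.random.default_rng(0))
print(len(bits))  # -> 8 = N_A + t*N_B bits from a 4-qubit "device"
```

Note how the bath column is recycled every round: the output record grows by $N_B$ bits per step even though the simulated device never exceeds $N_A + N_B$ qubits.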

2. Theoretical Underpinnings: Collision Probability and Anticoncentration

Let $Z = \sum_z p(z)^2$ denote the collision probability of the outcome distribution $p(z)$. Anticoncentration—essential for quantum advantage arguments—corresponds to $Z = O(1/D_{\mathrm{eff}})$.

The collision probability under ensemble averaging over step circuits $U_k$ (2-designs), after $t$ rounds, is rigorously computed as

$$\bar{Z}_t = \frac{2}{d_A + 1} \left( \frac{d_A + 1}{d_A d_B + 1} \right)^{t},$$

where $d_A = 2^{N_A}$, $d_B = 2^{N_B}$. Asymptotically ($d_A, d_B \gg 1$),

$$\bar{Z}_t \simeq \frac{2}{2^{N_A + t N_B}} = \frac{2}{D_{\mathrm{eff}}},$$

with multiplicative corrections of order $O(t/d_A)$.

This demonstrates that even for moderate circuit depths $t$, the output distribution closely approximates that of Haar-random circuits on $N_A + t N_B$ qubits. This implies that the sampling task remains exponentially anticoncentrated with respect to $D_{\mathrm{eff}}$.
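This exact 2-design average can be checked numerically at toy scale. The sketch below (my own consistency check, not code from the paper) enumerates the full joint distribution for $N_A = N_B = 1$, $t = 2$ with Haar-random step unitaries, and compares the ensemble-averaged collision probability to $\bar{Z}_2 = \frac{2}{3}\left(\frac{3}{5}\right)^2 = 0.24$:

```python
import numpy as np

def haar_unitary(dim, rng):
    # Haar-random unitary via QR of a complex Ginibre matrix
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def collision_probability(unitaries):
    """Sum_z p(z)^2 over the joint spatio-temporal distribution for
    N_A = N_B = 1; basis index i = 2*a + b (a: system bit, b: bath bit)."""
    branches = [np.array([1.0, 0.0], dtype=complex)]   # unnormalized A-states
    for U in unitaries:
        new_branches = []
        for phi in branches:
            psi = np.zeros(4, dtype=complex)
            psi[0::2] = phi                  # re-attach bath in |0>
            psi = U @ psi
            new_branches.append(psi[0::2])   # bath outcome z_k = 0
            new_branches.append(psi[1::2])   # bath outcome z_k = 1
        branches = new_branches
    probs = np.concatenate([np.abs(phi) ** 2 for phi in branches])  # measure A last
    assert abs(probs.sum() - 1.0) < 1e-9     # branches form a complete distribution
    return float(np.sum(probs ** 2))

def exact_mean_collision(dA, dB, t):
    # Exact 2-design ensemble average of the collision probability
    return 2.0 / (dA + 1) * ((dA + 1) / (dA * dB + 1)) ** t

rng = np.random.default_rng(7)
samples = [collision_probability([haar_unitary(4, rng) for _ in range(2)])
           for _ in range(500)]
print(np.mean(samples), exact_mean_collision(2, 2, 2))  # both close to 0.24
```

Because only second moments of the step unitaries enter, the agreement holds for any exact 2-design, not just Haar-random circuits.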

3. Sampling Complexity and Scaling Law

By construction,

$$\log_2 D_{\mathrm{eff}} = N_A + t N_B,$$

so $\log_2 D_{\mathrm{eff}}$ scales linearly with both register size and circuit depth. In the regime of many steps ($t \gg 1$), the effective sampling complexity grows exponentially in $t$.

This scaling law allows physical devices with a fixed qubit number to reach far past previous quantum hardware limits. For instance, with $N_A = N_B = 10$ and $t = 19$, a device with only 20 physical qubits samples from a 200-qubit distribution in Hilbert space.
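The arithmetic behind this example (the split $N_A = N_B = 10$, $t = 19$ is inferred from the 20-physical-qubit, 200-effective-qubit figures in the text):

```python
def effective_qubits(NA, NB, t):
    # log2(D_eff) = N_A + t * N_B: effective qubit number of the sampled distribution
    return NA + t * NB

print(effective_qubits(10, 10, 19))  # -> 200, from only 10 + 10 = 20 physical qubits
```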

4. Cross-Entropy Benchmarking and Noise Modeling

Fidelity between the experimental sampler $p_{\mathrm{exp}}$ and the ideal output distribution $p_{\mathrm{ideal}}$ is quantified by linear cross-entropy benchmarking (XEB):

$$F_{\mathrm{XEB}} = D_{\mathrm{eff}} \, \mathbb{E}_{z \sim p_{\mathrm{exp}}}\!\left[ p_{\mathrm{ideal}}(z) \right] - 1.$$

For ideal Porter–Thomas output over $N_A + t N_B$ qubits, $F_{\mathrm{XEB}} \simeq 1$; for uniform random output, $F_{\mathrm{XEB}} = 0$.
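A minimal estimator for this quantity, checked against synthetic Porter–Thomas-like and uniform samplers (a sketch under the standard linear-XEB definition, not the paper's benchmarking pipeline):

```python
import numpy as np

def xeb_fidelity(p_ideal, samples):
    """Linear XEB: F = D * E_{z ~ sampler}[p_ideal(z)] - 1."""
    D = len(p_ideal)
    return D * np.mean(p_ideal[samples]) - 1.0

rng = np.random.default_rng(0)
D = 2 ** 10
p_ideal = rng.exponential(size=D)        # Porter-Thomas-like weights...
p_ideal /= p_ideal.sum()                 # ...normalized into a distribution

ideal_samples = rng.choice(D, size=50_000, p=p_ideal)  # sampler follows p_ideal
uniform_samples = rng.integers(0, D, size=50_000)      # fully depolarized sampler

print(xeb_fidelity(p_ideal, ideal_samples))    # close to 1
print(xeb_fidelity(p_ideal, uniform_samples))  # close to 0
```

Only the ideal probabilities of the observed bitstrings are needed, which is what makes XEB practical as an experimental benchmark.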

Under channel noise modeled as local depolarizing maps per step, the XEB decays approximately as

$$F_{\mathrm{XEB}}(t) \approx (1 - \epsilon)^{G(t)},$$

where $\epsilon$ is the depolarizing parameter and $G(t)$ counts the noisy operations accumulated over $t$ steps. This form captures both the per-step decay and the partial plateau due to repeated measurement and reset.

5. Experimental Realization and Empirical Results

On IBM's Torino device (133-qubit Heron processor), the protocol was implemented with

  • Step circuit: 8-layer 1D hardware-efficient ansatz (single-qubit rotations + CZ in a brick-wall pattern).
  • Each step utilized mid-circuit measurement and optional reset of $B$.
  • Sampling: repeated shots per task, averaged over 10 random circuit instances per parameter setting.

Key benchmarks:

  • The largest single-register runs achieved a measurable XEB fidelity at an effective size representing a 3-fold improvement over previous 83-qubit RCS.
  • The two-patch protocol likewise achieved a measurable XEB fidelity at a larger effective size.

This constitutes experimental sampling from a $2^{200}$-dimensional space using only 20 physical qubits.

Each HRCS instance involved per-step resource counts of roughly 300 SX, 200 RZ, and 60 CZ gates. For the largest step counts $t$, the total exceeds 20,000 two-qubit gates.

6. Rigorous Bounds, Limitations, and Open Questions

All collision-probability and second-moment results derive from sequential Haar-twirling identities on a “doubled” Hilbert space, assuming the step circuits implement at least approximate 2-designs. The total variation distance to true Haar sampling over $N_A + t N_B$ qubits is then exponentially small.
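The underlying twirl identity can be sketched as follows (a standard second-moment computation, stated here as a consistency sketch rather than a quotation of the paper's proof):

```latex
% Second-moment (2-design) twirl on A \cup B, with d = d_A d_B and S the swap
% operator on the doubled Hilbert space:
\mathbb{E}_U\left[ U^{\otimes 2} X \, U^{\dagger \otimes 2} \right]
  = \frac{d \,\mathrm{tr}\,X - \mathrm{tr}(SX)}{d^3 - d}\,\mathbb{1}
  + \frac{d \,\mathrm{tr}(SX) - \mathrm{tr}\,X}{d^3 - d}\, S .
% Composing one twirl with measurement (and reset) of B yields a transfer map on
% the doubled A-register whose relevant eigenvalue is
\lambda = \frac{d_A + 1}{d_A d_B + 1},
% so after t steps the ensemble-averaged collision probability is
\bar{Z}_t = \frac{2}{d_A + 1}\,\lambda^{t}.
```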

Algorithmic and runtime limitations include:

  • 2-design assumption: Hardware-efficient ansätze approximate but do not formally guarantee 2-design behavior.
  • Complexity-theoretic hardness: While hardness evidence parallels random circuit sampling (RCS), full #P-hardness or rigorous average-case complexity results for HRCS remain open.
  • Error accumulation: No error correction is employed; resilience derives only from circuit anticoncentration and mid-circuit measurements. Longer runs (larger $t$) eventually accumulate noise beyond error-mitigation or statistical-averaging capabilities.
  • Reset fidelity and mid-circuit measurement quality are critical to XEB performance.
  • Scalability to larger effective dimensions would necessitate QRAM-style error correction.

7. Relation to Holographic Simulation and Deep Thermalization

HRCS shares conceptual foundations with teleportation-inspired algorithms for classical simulation of low-depth circuits (Chen et al., 2019) and measurement-driven state generation paradigms (Zhang et al., 2024). All exploit a trade-off between spatial quantum resources (qubit number) and temporal or circuit-depth resources (depth or sequential rounds $t$), often referred to as a holographic space-time tradeoff.

Unlike classical holographic simulation, which allows memory-efficient contraction for low-depth wide circuits, HRCS achieves quantum advantage by physically sampling exponentially large joint bitstrings (of length $N_A + t N_B$) via repeated circuit application and measurement.

In holographic deep thermalization (Zhang et al., 2024), similar sequential measure–reset protocols enable Haar-random state generation with only a small ancilla register, with rigorous decoupling guarantees and empirical frame-potential and XEB benchmarks.

8. Implications and Significance

The HRCS algorithm demonstrates that circuit depth, when exploited in the presence of mid-circuit measurement and register re-use, functions holographically as additional qubits. This enables an exponential scaling of sampling complexity relative to physical qubit resources. Verified both by exact theoretical formulas for collision probability and empirically via cross-entropy benchmarking, HRCS establishes a new route to scalable quantum advantage on near-term devices with fixed qubit count.

A plausible implication is the possibility of extending quantum supremacy demonstrations to much larger effective Hilbert spaces without hardware scaling, subject to the caveats of noise management and formal hardness proofs. The protocol synthesizes space-time resource trade-offs into a practical sampling benchmark, expanding the frontier for experimental quantum advantage.
