
Sampling-Based Linear Combination of Unitaries

Updated 24 January 2026
  • Sampling-based linear combination of unitaries is a method that expresses nonunitary operators as weighted sums of unitaries, enabling simulation of quantum processes.
  • The approach employs randomized, hybrid, and classical post-processing techniques to balance quantum circuit depth with classical sampling overhead.
  • Recent advances highlight its efficacy in ground-state preparation, spectral filtering, and observable estimation, proving valuable for NISQ and fault-tolerant regimes.

Sampling-based linear combination of unitaries (LCU) algorithms provide a set of NISQ- and fault-tolerance–compatible techniques for realizing non-trivial (often non-unitary) operators as expectation values by leveraging classical sampling and minimal quantum-coherent resources. These frameworks enable the simulation of nonunitary processes, spectral filtering, dynamical observables, and quantum measurement protocols by expanding the target operator as a (possibly integral) sum where each term is proportional to a unitary, estimated by simple quantum circuits, and aggregated via classical post-processing. Recent advances systematically clarify the trade-offs between quantum hardware requirements and classical sampling overhead, and demonstrate regimes where sampling-based (randomized or hybrid) LCU schemes are optimal or nearly so for ground-state preparation, quantum linear system solvers, non-Hermitian dynamics, error detection, and observable estimation.

1. Operator Representation in Sampling-based LCU

A general target operator can be expressed as a linear (or integral) combination of unitaries,

$$F(A) = \int_V f(\mathbf{t})\, G(A,\mathbf{t})\, d\mathbf{t}$$

or, in the discrete case, as

$$K = \sum_i c_i U_i$$

where $f(\mathbf{t})$ is a probability density over a $d$-dimensional domain $V \subset \mathbb{R}^d$ and $G(A,\mathbf{t}) = c(\mathbf{t})\, U(A,\mathbf{t})$ with $\|U(A,\mathbf{t})\| = 1$, or $c_i > 0$ in the discrete case. This representation supports the realization of filtered projectors, Green's function inverses, and general nonunitary processes after suitable truncation and normalization (Kawamata et al., 17 Sep 2025).
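As a concrete (hypothetical) minimal instance of the discrete form, the nonunitary projector onto $|0\rangle$ can be written as $(I+Z)/2$, a two-term LCU with positive coefficients:

```python
import numpy as np

# Toy discrete LCU: the nonunitary projector onto |0>, written as
# K = sum_i c_i U_i with c_i > 0, here (I + Z)/2.  A hypothetical
# minimal example, not an instance taken from the cited papers.
I = np.eye(2)
Z = np.diag([1.0, -1.0])

c = np.array([0.5, 0.5])          # positive LCU coefficients
unitaries = [I, Z]

K = sum(ci * Ui for ci, Ui in zip(c, unitaries))
assert np.allclose(K, np.diag([1.0, 0.0]))   # K = |0><0|

# Quantities reused by the sampling protocols:
c_l1 = np.abs(c).sum()            # normalization ||c||_1
p = np.abs(c) / c_l1              # sampling weights p_i = c_i / ||c||_1
```

The same pattern extends to many-term Pauli decompositions; only the coefficient vector and unitary list grow.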

When targeting state transformations or CP maps on density operators, the induced channel is

$$\Lambda(\rho) = K \rho K^\dagger$$

with normalization given by $\|c\|_1 = \sum_i |c_i|$ and probabilistic weights $p_i = c_i/\|c\|_1$ (Wada et al., 6 Dec 2025). In many algorithmic settings, composite LCUs (products of LCUs associated to time steps or filter segments) appear, denoted $A^{(\nu)} = \prod_{k=1}^\nu A_k$ (Sun et al., 18 Jun 2025).

2. Sampling-based LCU Protocols: Randomized, Hybrid, and Classical Post-Processing

Sampling-based implementations circumvent the need for a large coherent ancilla and complex multi-controlled unitaries by randomizing the selection of LCU terms and estimating the expectation values via quantum measurements. The main protocols include:

  • Randomized (Virtual) LCU: Sample index pairs $(i, j)$ with probability $p_i p_j$ and estimate $\operatorname{Re}\operatorname{tr}[O U_i \rho U_j^\dagger]$ using a Hadamard-test-style circuit with a single ancilla. This approach computes the same expectation as the original coherent LCU algorithm but with quadratic sampling overhead: the sample complexity for error $\epsilon$ scales as $O(1/(P^2\epsilon^2))$, where $P$ is the postselection probability in the coherent approach (Wada et al., 6 Dec 2025, Sun et al., 18 Jun 2025).
  • Hybrid Grouping Strategy: Interpolate between fully coherent and randomized LCU by partitioning the $m$-term sum into $G$ groups $S_k$. Each group forms a smaller coherent LCU, and sampling proceeds on a coarse-grained index set. The key resource parameter is the “reduction factor” $R$, which determines sampling overhead: $R$ strictly decreases as group sizes increase, interpolating between $R=1$ (randomized LCU) and $R=P$ (fully coherent LCU) (Wada et al., 6 Dec 2025).
  • Classical Post-Processing (LCU-CPP): For continuous or large discrete LCUs, one samples a set $\{\mathbf{t}_k\}_{k=1}^K$ (using, e.g., Monte Carlo, trapezoid, or quasi-Monte Carlo sequences) and estimates $\operatorname{Re}\operatorname{Tr}(G(A,\mathbf{t}_k)\rho)$ at each point. These values are aggregated classically to form the final estimator of $\operatorname{Tr}[F(A)\rho]$. No complex ancilla operations or block encodings are involved (Kawamata et al., 17 Sep 2025).
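The randomized protocol above can be sketched classically. In this hypothetical one-qubit instance ($K = (I+Z)/2$, $\rho = |+\rangle\langle+|$, $O = Z$), the Hadamard test is replaced by directly evaluating $\operatorname{Re}\operatorname{tr}[O U_i \rho U_j^\dagger]$, so only the sampling structure is illustrated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the randomized ("virtual") LCU estimator on a toy 1-qubit
# instance.  The Hadamard-test circuit is replaced by exact evaluation of
# Re tr[O U_i rho U_j^dagger]; only the (i, j) sampling is simulated.
I = np.eye(2)
Z = np.diag([1.0, -1.0])
O = Z
c = np.array([0.5, 0.5])
U = [I, Z]
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)               # rho = |+><+|

K = c[0] * U[0] + c[1] * U[1]
exact = np.trace(O @ K @ rho @ K.conj().T).real   # target tr[O K rho K^dag]

p = c / c.sum()              # sample (i, j) with probability p_i * p_j
prefactor = c.sum() ** 2     # ||c||_1^2 rescaling of each shot

n_shots = 20000
i = rng.choice(2, size=n_shots, p=p)
j = rng.choice(2, size=n_shots, p=p)
vals = [prefactor * np.trace(O @ U[a] @ rho @ U[b].conj().T).real
        for a, b in zip(i, j)]
est = np.mean(vals)          # converges to `exact` as n_shots grows
```

The $\|c\|_1^2$ prefactor is what produces the quadratic ($P^{-2}$-type) sampling overhead relative to the coherent scheme.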

3. Quasi-Monte Carlo and Classical Integration in LCU-CPP Frameworks

Quasi-Monte Carlo (QMC) methods provide deterministic low-discrepancy sampling sequences (e.g., Halton, Sobol) to accelerate convergence over standard Monte Carlo. QMC replaces i.i.d. random samples by a sequence $\{\mathbf{x}_k\}_{k=1}^K$ on $[0,1]^d$, transformed into $\mathbf{t}_k$ via the inverse CDF of $f$. By the Koksma–Hlawka inequality,

$$\left|\int_V f(\mathbf{t})\, h(\mathbf{t})\, d\mathbf{t} - \frac{1}{K}\sum_{k=1}^K h(\mathbf{t}_k)\right| \leq V_{\rm HK}(h)\, D^*(\{\mathbf{t}_k\})$$

yielding bias scaling as $O((\ln K)^d/K)$, superior to MC ($O(K^{-1/2})$) and the trapezoid rule ($O(K^{-2/d})$) (Kawamata et al., 17 Sep 2025). In LCU-CPP, QMC achieves the lowest end-to-end error for moderate shot counts $M = 10^2$–$10^3$ and minimal circuit depth, which is significant for NISQ-era implementations.
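The QMC advantage is easy to see in one dimension. The sketch below compares a base-2 van der Corput sequence (the 1-D Halton sequence) against plain Monte Carlo on the toy integral $\int_0^1 x^2\,dx = 1/3$; it is a generic illustration under uniform weight, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    seq = np.empty(n)
    for k in range(n):
        q, denom, x = k + 1, 1.0, 0.0
        while q > 0:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        seq[k] = x
    return seq

h = lambda x: x**2
exact = 1.0 / 3.0
K = 1024

qmc_err = abs(h(van_der_corput(K)).mean() - exact)   # bias ~ O(log K / K)

# Average MC error over repeats; each run has error ~ O(K^{-1/2}).
mc_err = np.mean([abs(h(rng.random(K)).mean() - exact) for _ in range(50)])
print(qmc_err, mc_err)   # QMC error is roughly an order of magnitude smaller
```

In higher dimensions the same idea applies componentwise with coprime bases, with the $(\ln K)^d$ factor gradually eroding the advantage as $d$ grows.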

At each QMC point, the Hadamard test is performed $M$ times to estimate $\nu_k$, and the samples are averaged; statistical shot noise decreases as $O((KM)^{-1/2})$. Practical benchmarks (ground-state and Green's function estimation on Heisenberg chains) demonstrate the lowest errors for QMC across all relevant $K$ and $M$ (Kawamata et al., 17 Sep 2025).
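The per-point primitive is the single-ancilla Hadamard test: $P(\text{ancilla}=0) - P(\text{ancilla}=1) = \operatorname{Re}\langle\psi|U|\psi\rangle$. A minimal statevector sketch, with a hypothetical one-qubit rotation standing in for $G(A,\mathbf{t}_k)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal statevector sketch of the single-ancilla Hadamard test.
# The 1-qubit rotation U and state |psi> are hypothetical placeholders.
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
psi = np.array([1.0, 0.0])

# After H, controlled-U, H the ancilla-0 branch carries (I + U)|psi>/2,
# so P(0) = ||(I + U)|psi>/2||^2 and P(0) - P(1) = Re<psi|U|psi>.
branch0 = (psi + U @ psi) / 2
p0 = np.vdot(branch0, branch0).real
signal = 2 * p0 - 1

# M-shot estimate of the same quantity; shot noise shrinks as O(M^{-1/2}).
M = 4000
estimate = 2 * (rng.random(M) < p0).mean() - 1
```

In LCU-CPP this circuit is rerun at each of the $K$ sample points, giving the combined $O((KM)^{-1/2})$ statistical error quoted above.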

4. Circuit Structures, Resource Scaling, and Algorithmic Trade-offs

Resource requirements are determined by LCU protocol choice and grouping:

| Method | Ancilla Qubits | Quantum Circuit Cost | Sampling Overhead |
| --- | --- | --- | --- |
| Coherent LCU | $O(\log m)$ | $O(m\log m)$, $m$ controlled-$U_i$ | $O(P^{-1}\epsilon^{-2})$ |
| Randomized LCU | $1$ | $O(1)$, two controlled-$U_i$ per shot | $O(P^{-2}\epsilon^{-2})$ |
| Hybrid LCU | $O(\max_k \log\lvert S_k\rvert)$ | $O(\max_k \lvert S_k\rvert\log\lvert S_k\rvert)$ | $O(R/(P^2\epsilon^2))$ |
| LCU-CPP (QMC) | $1$ | controlled-$G$ + $2$ H gates per shot | shot noise $O((KM)^{-1/2})$; bias $\sim$ QMC |

For LCU-CPP and randomized LCU, each run requires only a single ancilla qubit, and circuit depth effectively reduces to the cost of implementing $G(A, \mathbf{t})$ or $U_i$, plus two Hadamard gates. Coherent LCU demands logarithmic ancilla scaling (in $m$), high gate count, and exponential cost for large $m$. Hybrid algorithms achieve intermediate hardware and sampling cost by adjusting the group size $|S_k|$ (Wada et al., 6 Dec 2025).

Sampling overhead dominates for randomized LCU when $P$ is small, i.e., for nonselective target operators or low-probability subspaces. Hybrid grouping reduces this penalty by gathering high-weight terms into coherent clusters, lowering $R$ toward $P$ (Wada et al., 6 Dec 2025).
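A back-of-envelope comparison of the tabulated overheads makes the trade-off concrete; the values of $P$, $\epsilon$, and $R$ below are hypothetical, and all constants are suppressed:

```python
# Illustrative sample-count scalings from the resource table (constants
# suppressed; P, epsilon, and R are hypothetical values).
P, eps = 0.05, 1e-2

coherent = 1 / (P * eps**2)            # O(P^-1 eps^-2)
randomized = 1 / (P**2 * eps**2)       # O(P^-2 eps^-2): 1/P times worse

for R in (1.0, 0.3, P):                # reduction factor R in [P, 1]
    hybrid = R / (P**2 * eps**2)       # R = 1 -> randomized, R = P -> coherent
    print(f"R = {R:.2f}: hybrid ~ {hybrid:.1e} shots")

print(f"coherent ~ {coherent:.1e}, randomized ~ {randomized:.1e} shots")
```

At $P = 0.05$ the randomized scheme pays a factor of $1/P = 20$ more shots than the coherent one, which is exactly the gap the hybrid grouping closes as $R \to P$.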

5. Observable Estimation and Shadow Tomography Integration

Estimating expectation values $\operatorname{Tr}[O K \rho K^\dagger]$ in sampling-based LCU can be performed directly for a single observable through repeated runs. For simultaneous estimation of many observables, the connection between randomized LCU and classical shadow tomography is exploited. By instrumenting each shot with a random Clifford rotation and a computational-basis measurement, the resulting classical shadows encode the observables' values; an unbiased estimator for $U \rho V^\dagger$ is formed by

$$\hat v_{a,b}(R,z) = 2\, i^b (-1)^a\, \hat\sigma_{a,b}(R,z)$$

where $a, b$ are ancilla measurement outcomes and $R, z$ label the classical shadow (Sun et al., 18 Jun 2025).

For $M$ observables, the shot complexity is $O(\epsilon^{-2}\max_m \|O_m\|_{\rm shadow}^2 \log(M/\delta))$, leveraging shadow-norm bounds to ensure efficient estimation (Sun et al., 18 Jun 2025). This framework enables concurrent multi-observable estimation in quantum simulation and phase-estimation routines, at sample cost competitive with the best postselected and randomized approaches.

6. Applications and Case Studies

Sampling-based LCU schemes have been instantiated in a diverse range of quantum algorithms:

  • High-precision Hamiltonian simulation: Decomposition of Trotter steps into Pauli LCUs, with randomized or grouped samplings yielding sample complexities of $O(\epsilon^{-2}\|O\|_{\rm shadow}^2\log(M/\delta))$, and depth per shot scaling as $O(T^{1+1/(4k)}\log(1/\epsilon))$ for $k$th-order methods (Sun et al., 18 Jun 2025).
  • Ground-state preparation and spectral filtering: Gaussian-filtered projectors realized by QMC-integrated LCU-CPP or by hybrid virtual+coherent schemes that balance ancilla usage and sampling cost, achieving total time complexity $O(p_0^{-1}\Delta^{-1}\epsilon^{-2}\log(1/\epsilon))$ while minimizing circuit resources (Kawamata et al., 17 Sep 2025, Wada et al., 6 Dec 2025).
  • Quantum linear system solvers: Discretized Fourier-integral LCUs over $O(JK)$ terms, with coherent, randomized, and hybrid groupings yielding respective scalings in ancilla count and sample overhead. The hybrid approach recovers the best scaling of both extremes while minimizing hardware needs (Wada et al., 6 Dec 2025).
  • Non-Hermitian and open-system dynamics: Dynamical maps expanded as integrals or sums over parameterized unitaries, sampled according to $f$, with grouping techniques (large cluster/coherent for dominant regions, singleton/virtual for tails) enabling substantial circuit-depth reduction at negligible increase in sampling cost (Wada et al., 6 Dec 2025).
  • Quantum error detection and syndrome extraction: Projectors onto error-free subspaces represented as uniform LCUs over stabilizers, allowing randomized sampling (virtual detection), coherent detection, or hybrid strategies targeting biased error models. Resource trade-offs are custom-fitted to the code structure and noise model (Wada et al., 6 Dec 2025).
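The Gaussian-filter application can be checked numerically. Under the standard Fourier identity $e^{-aH^2/2} = (2\pi a)^{-1/2}\int e^{-t^2/(2a)}\, e^{-iHt}\, dt$, the sketch below discretizes the integral LCU over time-evolution unitaries with a trapezoid rule on a truncated range; the random Hermitian "Hamiltonian" is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Numerical sketch of a Gaussian spectral filter as an integral LCU,
#   exp(-a H^2 / 2) = (2 pi a)^(-1/2) Int exp(-t^2/(2a)) exp(-i H t) dt,
# discretized with trapezoid weights on a truncated time window.
a = 1.0
H = rng.standard_normal((4, 4))
H = (H + H.T) / 2                        # random Hermitian "Hamiltonian"
evals, V = np.linalg.eigh(H)

def U(t):
    """Time-evolution unitary e^{-iHt} via the eigendecomposition of H."""
    return (V * np.exp(-1j * evals * t)) @ V.conj().T

ts = np.linspace(-8.0, 8.0, 401)
w = np.full(ts.size, ts[1] - ts[0])
w[0] = w[-1] = w[0] / 2                  # trapezoid quadrature weights
f = np.exp(-ts**2 / (2 * a))             # Gaussian LCU weight function

F = sum(wk * fk * U(t) for wk, fk, t in zip(w, f, ts))
F /= np.sqrt(2 * np.pi * a)

exact = (V * np.exp(-a * evals**2 / 2)) @ V.conj().T   # filter applied exactly
err = np.max(np.abs(F - exact))          # quadrature error of the LCU
```

In a sampling-based protocol the quadrature sum would instead be estimated shot by shot, with each $e^{-iHt_k}$ realized as a circuit and the weights $w_k f_k$ applied in classical post-processing.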

7. Caveats, Regimes, and Hardware Relevance

Sampling-based LCU methods are not universally optimal: for large $M$ (shots per QMC point) or low $d$ (integration dimension), trapezoidal and Simpson rules can outperform QMC. When $M=1$, shot noise dominates, and MC and QMC both outperform deterministic grid-based rules. The hybrid grouping strategy enables near-optimal trade-offs only when the operator expansion can be partitioned sensibly; poor groupings incur unnecessary sampling or circuit costs (Kawamata et al., 17 Sep 2025, Wada et al., 6 Dec 2025).

Randomized and hybrid LCU approaches are especially valuable in the NISQ and early fault-tolerant regimes due to their minimal circuit depth and flexible ancilla requirements; however, the inherent variance and sometimes large sample overhead (scaling as $P^{-2}$) remain their main limitations. All estimators for “virtual” effective states $U\rho V^\dagger$ are classical constructs: these states are never physically prepared, but are estimated via carefully designed quantum instruments and classical post-processing (Sun et al., 18 Jun 2025, Wada et al., 6 Dec 2025).

Sampling-based LCU, and in particular QMC-LCU-CPP, delivers practical, provably faster convergence for nonunitary operator estimation, avoids large high-dimensional prefactors, and achieves minimal end-to-end error for practical shot counts at the lowest possible circuit depth for quantum hardware (Kawamata et al., 17 Sep 2025).
