
Fourier LCU for Non-Unitary Decompositions

Updated 1 February 2026
  • The paper introduces Fourier-LCU, a framework that decomposes non-unitary operators into linear combinations of unitaries using periodic extension and Fourier sine series for exponential error convergence.
  • It converts sine series into complex-exponential forms to map operator terms onto pairs of unitaries, facilitating a block-encoding method with double-logarithmic subnormalization scaling.
  • The approach employs convex optimization for coefficient regularization, achieving a Pareto-optimal trade-off between error tolerance and resource efficiency in quantum algorithm implementations.

A Fourier Linear Combination of Unitaries (Fourier-LCU) is a general analytic method for decomposing arbitrary non-unitary operators into accurate, exponentially convergent linear combinations of unitary operators. This is accomplished via smooth periodic extension and Fourier sine series techniques, yielding a block-encoding whose subnormalization parameter exhibits double-logarithmic scaling in the target error. The framework leverages convex optimization to regularize the coefficients for specific error budgets, tracing out a Pareto front for subnormalization-versus-error. These advances constitute a versatile approach for non-unitary quantum algorithms and circuits (Brearley et al., 25 Jan 2026).

1. Periodic Extension and Fourier Sine Series Construction

To represent an operator via an LCU, the core technical step is constructing a periodic extension of the identity function $f(\tau) = \tau$ on a given interval. Fixing $\tau \in [-\pi/\eta, \pi/\eta]$ for some $\eta > 1$, one extends $f(\tau)$ to $[-\pi, \pi]$ as a $2\pi$-periodic, odd, and infinitely differentiable function. This ensures analyticity and hence exponential decay of the Fourier coefficients.

The extension yields a truncated $m$-term sine-series approximation:

$$f(\tau) \approx \sum_{k=1}^{m} a_k \sin(k \tau)$$

The optimal coefficients $a_k$ are determined by a continuous least-squares problem over $[-\pi/\eta, \pi/\eta]$:

$$a^{\mathrm{LS}} = \arg\min_{a \in \mathbb{R}^m} \int_{-\pi/\eta}^{\pi/\eta} \left( \tau - \sum_{k=1}^m a_k \sin(k\tau) \right)^2 d\tau$$

The resulting normal equations are $G(\eta)\,a = b(\eta)$, with

$$G_{jk} = \int_{-\pi/\eta}^{\pi/\eta} \sin(j\tau)\sin(k\tau)\,d\tau, \qquad b_j = \int_{-\pi/\eta}^{\pi/\eta} \tau\,\sin(j\tau)\,d\tau$$

Since $\eta > 1$, the coefficients $a_k$ decay exponentially in $k$, ensuring rapid convergence.
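The construction above can be sketched numerically. The following Python snippet is an illustrative sketch, not the authors' code: the function name `sine_series_coeffs` and the closed-form evaluations of the Gram integrals are our own. It assembles $G(\eta)$ and $b(\eta)$ and solves the normal equations:

```python
import numpy as np

def sine_series_coeffs(m, eta):
    """Least-squares sine coefficients for f(tau) = tau on [-pi/eta, pi/eta],
    via the normal equations G(eta) a = b(eta) with closed-form integrals."""
    L = np.pi / eta
    j = np.arange(1, m + 1)
    J, K = np.meshgrid(j, j, indexing="ij")
    with np.errstate(divide="ignore", invalid="ignore"):
        # G_jk = int_{-L}^{L} sin(j t) sin(k t) dt
        G = np.where(
            J == K,
            L - np.sin(2 * J * L) / (2 * J),
            np.sin((J - K) * L) / (J - K) - np.sin((J + K) * L) / (J + K),
        )
    # b_j = int_{-L}^{L} t sin(j t) dt = 2 (sin(jL) - jL cos(jL)) / j^2
    b = 2 * (np.sin(j * L) - j * L * np.cos(j * L)) / j**2
    return np.linalg.solve(G, b)

a = sine_series_coeffs(m=12, eta=1.5)
tau = np.linspace(-np.pi / 1.5, np.pi / 1.5, 400)
fit = np.sin(np.outer(tau, np.arange(1, 13))) @ a
print(np.max(np.abs(fit - tau)))  # residual shrinks rapidly as m grows
```

For fixed $\eta$, increasing $m$ should drive the residual down roughly exponentially, consistent with the decay of the $a_k$.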

2. Complex-Exponential Formulation and Unitary Mapping

The sine series can be rewritten using the Euler identity:

$$\sin(kx) = \frac{e^{ikx} - e^{-ikx}}{2i}$$

Substitution yields:

$$f(x) = \sum_{k=1}^m a_k \sin(kx) = \sum_{n=-m}^{m} c_n e^{inx}$$

where $c_{+k} = -\tfrac{i}{2} a_k$, $c_{-k} = +\tfrac{i}{2} a_k$, and $c_0 = 0$. Consequently, each sine term maps to a pair of unitaries with complex weights, forming the desired LCU structure.
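As a quick sanity check of this mapping, a small sketch (with illustrative helper names and toy coefficients of our own, not values from the paper) converts sine coefficients into the $c_n$ and confirms that the two representations coincide:

```python
import numpy as np

def sine_to_exponential(a):
    """Map sine coefficients a_k to c_n, n = -m..m, stored as c[n + m]:
    c_{+k} = -i a_k / 2, c_{-k} = +i a_k / 2, c_0 = 0."""
    m = len(a)
    c = np.zeros(2 * m + 1, dtype=complex)
    for k in range(1, m + 1):
        c[m + k] = -0.5j * a[k - 1]
        c[m - k] = +0.5j * a[k - 1]
    return c

a = np.array([1.0, -0.3, 0.07])  # toy coefficients, not from the paper
c = sine_to_exponential(a)
x = np.linspace(-np.pi, np.pi, 101)
sine_form = sum(ak * np.sin((k + 1) * x) for k, ak in enumerate(a))
exp_form = sum(c[n + len(a)] * np.exp(1j * n * x)
               for n in range(-len(a), len(a) + 1))
print(np.max(np.abs(sine_form - exp_form)))  # ~0: imaginary parts cancel
```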

3. Application to Arbitrary Non-Unitary Operators

Let $A$ be a general (potentially non-unitary) operator. Decompose $A$ into Hermitian and anti-Hermitian components:

$$A = H_1 + i H_2, \qquad H_1 = \frac{A + A^\dagger}{2}, \qquad H_2 = \frac{A - A^\dagger}{2i}$$

The LCU approximation proceeds by:

  • Choosing $\tau$ so that the spectra of $\tau H_1$ and $\tau H_2$ lie within $[-\pi/\eta, \pi/\eta]$;
  • Approximating $H_1$ and $i H_2$ by sine series as above;
  • Rewriting each sine term in unitary-difference form.

The full LCU for $A$, accurate to exponentially small error $O(e^{-c m})$, is:

$$A \approx \sum_{k=1}^m \frac{a_k}{2\tau} \left[ i e^{-i k \tau H_1} - i e^{+i k \tau H_1} + e^{+i k \tau H_2} - e^{-i k \tau H_2} \right] = \sum_{j=1}^{4m} \kappa_j U_j$$

Each $U_j$ is one of $e^{\pm i k \tau H_1}, e^{\pm i k \tau H_2}$, with real weights $\kappa_j$ given by $\pm a_k/(2\tau)$ (up to phase).
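Putting these pieces together, the following sketch checks the $4m$-term decomposition on a small random non-unitary $A$. It is illustrative only: the `herm_exp` helper is our own, and the coefficients $a_k$ come from a plain least-squares fit rather than the paper's tabulated, optimized values.

```python
import numpy as np

def herm_exp(H, s):
    """e^{i s H} for Hermitian H via eigendecomposition (exactly unitary)."""
    lam, V = np.linalg.eigh(H)
    return (V * np.exp(1j * s * lam)) @ V.conj().T

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H1 = (A + A.conj().T) / 2
H2 = (A - A.conj().T) / 2j

eta, m = 1.5, 16
# scale so spec(tau * H_i) lies in [-pi/eta, pi/eta]
tau = np.pi / (eta * max(np.linalg.norm(H1, 2), np.linalg.norm(H2, 2)))

# least-squares sine coefficients for f(t) = t on [-pi/eta, pi/eta]
t = np.linspace(-np.pi / eta, np.pi / eta, 2000)
Phi = np.sin(np.outer(t, np.arange(1, m + 1)))
a, *_ = np.linalg.lstsq(Phi, t, rcond=None)

# A ~ sum_k a_k/(2 tau) [ i e^{-ik tau H1} - i e^{+ik tau H1}
#                         + e^{+ik tau H2} - e^{-ik tau H2} ]
approx = np.zeros_like(A)
for k in range(1, m + 1):
    w = a[k - 1] / (2 * tau)
    approx += w * (1j * herm_exp(H1, -k * tau) - 1j * herm_exp(H1, +k * tau)
                   + herm_exp(H2, +k * tau) - herm_exp(H2, -k * tau))
print(np.linalg.norm(A - approx, 2) / np.linalg.norm(A, 2))
```

Because $H_1$ and $H_2$ are Hermitian, each `herm_exp` factor is exactly unitary, so the weighted sum is a genuine LCU; its error falls off rapidly with $m$.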

4. Block-Encoding and Subnormalization Scaling

Employing standard LCU block-encoding [Childs–Wiebe 2012], one introduces $n_a = \lceil \log_2(4m) \rceil$ ancillas, prepares the amplitude state with $V$, applies controlled-$U_j$ gates, and uncomputes with $W^\dagger$:

$$(\langle 0|^{\otimes n_a} \otimes I)\, \left[ W^\dagger\, (\mathrm{ctrl}\text{-}U)\, V \right] (|0\rangle^{\otimes n_a} \otimes I) = A/\alpha + O(\epsilon)$$

with subnormalization parameter

$$\alpha = \sum_{j=1}^{4m} |\kappa_j| = \frac{2}{\tau} \sum_{k=1}^{m} |a_k| = \frac{2\eta}{\pi} \max\left(\|H_1\|_2, \|H_2\|_2\right) \sum_{k=1}^{m} |a_k|$$

Since the $a_k$ decay exponentially and, empirically, $\sum_{k=1}^{m} |a_k| = O(\log m)$, the total normalization satisfies:

$$\alpha = O\big(\|A\|_2 \cdot \log\log(1/\epsilon)\big)$$

This double-logarithmic scaling in 1/ϵ1/\epsilon is a substantial improvement over previous polynomial relationships between subnormalization and error.
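The subnormalization formula can be illustrated numerically. This is a sketch under our own assumptions: plain least-squares coefficients on a discretized grid, not the paper's optimized values, tracking $\sum_k |a_k|$ against the fit error as $m$ grows:

```python
import numpy as np

eta = 1.5
t = np.linspace(-np.pi / eta, np.pi / eta, 4000)
results = {}
for m in (4, 8, 16, 32):
    # dictionary Phi_jk = sin(k t_j); least-squares fit of f(t) = t
    Phi = np.sin(np.outer(t, np.arange(1, m + 1)))
    a, *_ = np.linalg.lstsq(Phi, t, rcond=None)
    l1, err = np.sum(np.abs(a)), np.max(np.abs(Phi @ a - t))
    results[m] = (l1, err)
    print(m, l1, err)  # l1 norm grows slowly while the error falls off quickly
```

Multiplying the $\ell_1$ column by $\tfrac{2\eta}{\pi}\max(\|H_1\|_2, \|H_2\|_2)$ gives $\alpha$ for a concrete operator.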

5. Coefficient Regularization and Pareto Front Optimization

Because the sine dictionary is overcomplete for $\eta > 1$, infinitely many coefficient sets achieve nearly identical $L^2$ error while differing in $L^1$ norm (which determines $\alpha$). The coefficients are therefore regularized via convex optimization, exploiting this trade-off:

$$J(a; \lambda) = \|\tau - \Phi a\|_2 + \lambda\, \frac{2\eta}{\pi}\, \|a\|_1$$

for $a \in \mathbb{R}^m$ and dictionary matrix $\Phi_{jk} = \sin(k \tau_j)$. At a fixed error budget $\epsilon$, the $L^1$ minimization reads:

$$\alpha^*(\epsilon) = \min\left\{ \frac{2\eta}{\pi} \|a\|_1 \,:\, \|\tau - \Phi a\|_2 \leq \epsilon \right\}$$

Standard convex solvers or homotopy/LASSO-type path tracking yield the Pareto front $\alpha^*(\epsilon)$. It is proven that $J^*_m(\lambda)$ and $\alpha^*_m(\epsilon)$ are nonincreasing in $m$ and converge to a finite limit as $m \to \infty$. Numerically, sweeping $\lambda$ down to zero identifies the lowest achievable $\alpha$ at the target $\epsilon$.
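A minimal stand-in for this convex-optimization step is iterative soft-thresholding (ISTA), a basic LASSO-type solver; this is our own sketch, not the paper's solver or its exact penalty scaling:

```python
import numpy as np

def soft(z, thresh):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def ista(Phi, y, lam, iters=2000):
    """Minimize 0.5 ||y - Phi a||_2^2 + lam ||a||_1 by proximal gradient."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1/L, L = Lipschitz const of grad
    a = np.zeros(Phi.shape[1])
    for _ in range(iters):
        a = soft(a + step * Phi.T @ (y - Phi @ a), step * lam)
    return a

eta, m = 1.5, 24
t = np.linspace(-np.pi / eta, np.pi / eta, 1000)
Phi = np.sin(np.outer(t, np.arange(1, m + 1)))
results = []
for lam in (10.0, 0.1, 0.001):  # sweep the regularization strength downward
    a = ista(Phi, t, lam)
    results.append((np.linalg.norm(t - Phi @ a), np.sum(np.abs(a))))
    print(lam, results[-1])  # (residual, l1 norm): one point on the trade-off
```

Sweeping $\lambda$ toward zero traces a residual-versus-$\ell_1$ trade-off curve, a discretized stand-in for the Pareto front $\alpha^*(\epsilon)$.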

6. Implementation Procedures and Practical Implications

The Fourier-LCU methodology is summarized by the following stepwise procedure:

  1. Construct an analytic $2\pi$-periodic, odd extension of $f(\tau) = \tau$ from $[-\pi/\eta, \pi/\eta]$ to $[-\pi, \pi]$.
  2. Compute the exponentially convergent truncated sine series via least squares.
  3. Convert each $\sin(k \tau H_i)$ into an LCU of $e^{\pm i k \tau H_i}$ for $i = 1, 2$; assemble $A = H_1 + i H_2$ via weighted sums.
  4. Realize the decomposition as an $(\alpha, n_a, \epsilon)$ block-encoding, with $\alpha$ scaling as $O(\log\log(1/\epsilon))$.
  5. Optionally, re-optimize the coefficients with $\ell_1$-regularized least squares to minimize $\alpha$ at fixed $\epsilon$, thereby mapping out the Pareto front.

All essential equations for $a_k$, $c_n$, $\kappa_j$, and $\alpha$, as well as the regularization trade-offs, are stated explicitly; optimized $a_k$ values for various $m$ are tabulated in Table B of the source (Brearley et al., 25 Jan 2026).

A plausible implication is that non-unitary quantum algorithms leveraging Fourier-LCU can reach error targets at far lower resource cost than polynomial-scaling frameworks, and that coefficient regularization yields tunable sparsity for practical block-encodings.
