Fourier LCU for Non-Unitary Decompositions
- The paper introduces Fourier-LCU, a framework that decomposes non-unitary operators into linear combinations of unitaries using periodic extension and Fourier sine series for exponential error convergence.
- It converts sine series into complex-exponential forms to map operator terms onto pairs of unitaries, facilitating a block-encoding method with double-logarithmic subnormalization scaling.
- The approach employs convex optimization for coefficient regularization, achieving a Pareto-optimal trade-off between error tolerance and resource efficiency in quantum algorithm implementations.
A Fourier Linear Combination of Unitaries (Fourier-LCU) is a general analytic method for decomposing arbitrary non-unitary operators into accurate, exponentially convergent linear combinations of unitary operators. This is accomplished via smooth periodic extension and Fourier sine series techniques, yielding a block-encoding whose subnormalization parameter exhibits double-logarithmic scaling in the target error. The framework leverages convex optimization to regularize the coefficients for specific error budgets, tracing out a Pareto front for subnormalization-versus-error. These advances constitute a versatile approach for non-unitary quantum algorithms and circuits (Brearley et al., 25 Jan 2026).
1. Periodic Extension and Fourier Sine Series Construction
To represent an operator via an LCU, the core technical step is constructing a periodic extension of the identity function $f(x) = x$ within a given interval. Fixing a half-period $L > \beta$ for some $\beta$ bounding the relevant spectral interval $[-\beta, \beta]$, one extends $f$ to $\mathbb{R}$ as a $2L$-periodic, odd, and infinitely differentiable function. This ensures analyticity of the extension and supports exponential Fourier coefficient decay.
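Such an extension can be sketched numerically. The construction below is a minimal illustration using a standard bump-function taper; the parameters $\beta = 1$, $L = 2$ and this specific taper are assumptions for illustration, not the paper's exact extension.

```python
import numpy as np

BETA, L = 1.0, 2.0  # illustrative fitting interval [-BETA, BETA], half-period L

def _h(t):
    # C-infinity function vanishing to all orders at t = 0.
    return np.where(t > 0, np.exp(-1.0 / np.maximum(t, 1e-300)), 0.0)

def smooth_step(t):
    # Smooth transition from 0 (t <= 0) to 1 (t >= 1).
    return _h(t) / (_h(t) + _h(1.0 - t))

def g(x):
    # Odd, 2L-periodic extension: equals x on [-BETA, BETA], decays
    # smoothly to 0 at |x| = L, then repeats with period 2L.
    x = np.asarray(x, dtype=float)
    xm = np.mod(x + L, 2 * L) - L          # wrap into [-L, L)
    taper = 1.0 - smooth_step((np.abs(xm) - BETA) / (L - BETA))
    return xm * taper

print(g(0.5), g(2.0))   # identity on the fitting interval, zero at |x| = L
```

The taper leaves $f(x) = x$ untouched on $[-\beta, \beta]$, so the fit interval is exact while the behavior on $[\beta, L]$ is free, which is what permits rapid coefficient decay.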
The extension yields a truncated $K$-term sine series approximation:
$$f_K(x) = \sum_{k=1}^{K} c_k \sin\!\left(\frac{k\pi x}{L}\right) \approx x, \qquad x \in [-\beta, \beta].$$
The optimal coefficients $c_1, \dots, c_K$ are determined by a continuous least-squares problem over $[-\beta, \beta]$:
$$\min_{c}\ \int_{-\beta}^{\beta} \Big(x - \sum_{k=1}^{K} c_k \sin\frac{k\pi x}{L}\Big)^{2}\,dx.$$
The resulting normal equations are $Mc = b$, with
$$M_{jk} = \int_{-\beta}^{\beta} \sin\frac{j\pi x}{L}\,\sin\frac{k\pi x}{L}\,dx, \qquad b_j = \int_{-\beta}^{\beta} x\,\sin\frac{j\pi x}{L}\,dx.$$
Since the extension is analytic, the coefficients $c_k$ exhibit exponential decay in $k$, ensuring rapid convergence of the truncated series.
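A minimal numerical sketch of the least-squares fit, discretizing the integrals on a grid; the values $\beta = 1$, $L = 2$, $K = 12$ are illustrative choices, not the paper's:

```python
import numpy as np

BETA, L, K = 1.0, 2.0, 12

x = np.linspace(-BETA, BETA, 2001)              # fitting / quadrature grid
S = np.sin(np.outer(x, np.arange(1, K + 1)) * np.pi / L)   # n x K dictionary
f = x                                           # target: identity function

# Discrete analogue of the normal equations M c = b; lstsq with SVD
# truncation handles any ill-conditioning of the overcomplete dictionary.
c, *_ = np.linalg.lstsq(S, f, rcond=None)

err = np.max(np.abs(S @ c - f))                 # sup-norm fit error on grid
print(f"max |f_K(x) - x| on [-beta, beta]: {err:.2e}")
```

Because the fit is only enforced on $[-\beta, \beta] \subset [-L, L]$, the residual falls geometrically in $K$, far faster than the $O(1/k^2)$ decay of the naive triangle-wave Fourier series.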
2. Complex-Exponential Formulation and Unitary Mapping
The sine series can be rewritten using the Euler identity:
$$\sin\theta = \frac{e^{i\theta} - e^{-i\theta}}{2i}.$$
For a Hermitian operator $H$ with $\|H\| \le \beta$, substitution yields:
$$f_K(H) = \sum_{k=1}^{K} w_k \big( U_k - U_k^{\dagger} \big) \approx H,$$
where $U_k = e^{ik\pi H/L}$, $U_k^{\dagger} = e^{-ik\pi H/L}$, and $w_k = c_k/(2i)$. Consequently, each sine term maps to a pair of unitaries with complex weights, forming the desired LCU structure.
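The unitary mapping can be checked directly on a random Hermitian matrix. This is a sketch with matrix functions evaluated via eigendecomposition; the size, seed, and $k$, $L$ values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
L, k = 2.0, 3

B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (B + B.conj().T) / 2
H /= np.linalg.norm(H, 2)            # rescale so ||H|| <= 1

w, V = np.linalg.eigh(H)             # H = V diag(w) V^dagger
sin_kH = (V * np.sin(k * np.pi * w / L)) @ V.conj().T
U_k    = (V * np.exp(1j * k * np.pi * w / L)) @ V.conj().T

unitarity = np.linalg.norm(U_k @ U_k.conj().T - np.eye(4))
gap = np.linalg.norm(sin_kH - (U_k - U_k.conj().T) / 2j)
print(f"||U U^dag - I|| = {unitarity:.1e},  "
      f"||sin(k pi H/L) - (U - U^dag)/2i|| = {gap:.1e}")
```

Since $H$ is Hermitian, $e^{ik\pi H/L}$ is exactly unitary, so each sine term costs two controlled unitaries in the final circuit.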
3. Application to Arbitrary Non-Unitary Operators
Let $A$ be a general (potentially non-unitary) operator. Decompose $A$ into Hermitian and anti-Hermitian components:
$$A = H_1 + iH_2, \qquad H_1 = \frac{A + A^{\dagger}}{2}, \quad H_2 = \frac{A - A^{\dagger}}{2i},$$
so that $H_1$ and $H_2$ are Hermitian and $iH_2$ is the anti-Hermitian part.
The LCU approximation proceeds by:
- Choosing $L > \beta \ge \max(\|H_1\|, \|H_2\|)$ so that the spectra of $H_1$ and $H_2$ lie in $[-\beta, \beta]$;
- Approximating $H_1$ and $H_2$ by sine series as above;
- Rewriting each sine term in unitary difference form.
The full LCU for $A$ to exponentially small error is:
$$A \approx f_K(H_1) + i f_K(H_2) = \sum_{k=1}^{K} \frac{c_k}{2i}\Big(e^{ik\pi H_1/L} - e^{-ik\pi H_1/L}\Big) + i\sum_{k=1}^{K} \frac{c_k}{2i}\Big(e^{ik\pi H_2/L} - e^{-ik\pi H_2/L}\Big) \equiv \sum_{j=1}^{4K} w_j V_j.$$
Each $V_j$ is one of $e^{\pm ik\pi H_1/L}$ or $e^{\pm ik\pi H_2/L}$, with weights $w_j$ of magnitude $|c_k|/2$ (real up to phase).
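The steps above can be assembled into a short end-to-end sketch. All parameters here are illustrative, and the exponentials $e^{\pm ik\pi H_j/L}$ (computed by eigendecomposition) are the unitaries of the LCU.

```python
import numpy as np

BETA, L, K = 1.0, 2.0, 12
rng = np.random.default_rng(1)

A0 = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = A0 / (2 * np.linalg.norm(A0, 2))          # ensure ||A|| = 1/2 <= beta
H1 = (A + A.conj().T) / 2                     # Hermitian part
H2 = (A - A.conj().T) / 2j                    # i*H2 is the anti-Hermitian part

# Sine-series coefficients for f(x) = x on [-BETA, BETA] via least squares.
x = np.linspace(-BETA, BETA, 2001)
S = np.sin(np.outer(x, np.arange(1, K + 1)) * np.pi / L)
c, *_ = np.linalg.lstsq(S, x, rcond=None)

def lcu_sum(H):
    # sum_k (c_k / 2i) (exp(i k pi H / L) - exp(-i k pi H / L)) ~ H
    w, V = np.linalg.eigh(H)
    out = np.zeros_like(V)
    for k, ck in enumerate(c, start=1):
        U = (V * np.exp(1j * k * np.pi * w / L)) @ V.conj().T
        out = out + (ck / 2j) * (U - U.conj().T)
    return out

A_lcu = lcu_sum(H1) + 1j * lcu_sum(H2)
err = np.linalg.norm(A_lcu - A, 2)
print(f"||A_LCU - A||_2 = {err:.2e}")
```

The same coefficient vector $c$ serves both $H_1$ and $H_2$, since each is approximated by the one scalar fit $f_K(x) \approx x$ applied to its spectrum.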
4. Block-Encoding and Subnormalization Scaling
Employing standard LCU block-encoding [Childs–Wiebe 2012], one introduces $\lceil \log_2(4K) \rceil$ ancillas, prepares the amplitude state $\mathrm{PREP}\,|0\rangle = \lambda^{-1/2}\sum_j \sqrt{|w_j|}\,|j\rangle$, applies controlled-$V_j$ gates through $\mathrm{SELECT} = \sum_j |j\rangle\langle j| \otimes V_j$, and uncomputes via $\mathrm{PREP}^{\dagger}$:
$$\big(\langle 0|\mathrm{PREP}^{\dagger} \otimes I\big)\,\mathrm{SELECT}\,\big(\mathrm{PREP}\,|0\rangle \otimes I\big) = \frac{A}{\lambda} + O(\varepsilon),$$
with subnormalization parameter
$$\lambda = \sum_{j=1}^{4K} |w_j| = 2\sum_{k=1}^{K} |c_k|.$$
Since the $|c_k|$ decay exponentially and empirically $K = O(\log(1/\varepsilon))$, the total normalization satisfies:
$$\lambda = O\big(\log\log(1/\varepsilon)\big).$$
This double-logarithmic scaling in the target error $\varepsilon$ is a substantial improvement over the polynomial relationships between subnormalization and error in previous approaches.
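A sketch of how the truncation error and the subnormalization $\lambda = 2\sum_k |c_k|$ behave as $K$ grows, using unregularized least-squares coefficients and illustrative parameters; the paper's optimized values differ.

```python
import numpy as np

BETA, L = 1.0, 2.0
x = np.linspace(-BETA, BETA, 2001)

results = {}
for K in (4, 8, 16):
    S = np.sin(np.outer(x, np.arange(1, K + 1)) * np.pi / L)
    c, *_ = np.linalg.lstsq(S, x, rcond=None)
    err = np.max(np.abs(S @ c - x))       # truncation error on the grid
    lam = 2 * np.sum(np.abs(c))           # subnormalization of the 4K-term LCU
    results[K] = (err, lam)
    print(f"K={K:2d}  max error = {err:.2e}  lambda = {lam:.3f}")
```

The error drops geometrically while $\lambda$ stays nearly flat, which is the qualitative content of the double-logarithmic scaling claim.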
5. Coefficient Regularization and Pareto Front Optimization
Because the sine dictionary is overcomplete for $\beta < L$, there exist infinitely many coefficient sets yielding nearly identical error yet different coefficient sums (impacting $\lambda$). Regularization is performed via convex optimization, exploiting the trade-off between the residual $\|Dc - f\|_2$ and the weight $\|c\|_1$, for sampled target $f_i = x_i$ and dictionary matrix $D_{ik} = \sin(k\pi x_i/L)$. At fixed error budget $\varepsilon$, the $\ell_1$-minimization is:
$$\min_{c} \|c\|_1 \quad \text{subject to} \quad \|Dc - f\|_2 \le \varepsilon.$$
Standard convex solvers or homotopy/LASSO-type path tracking yield the unique Pareto front $\lambda(\varepsilon)$. It is proven that $\lambda(\varepsilon)$ and $\|c(\varepsilon)\|_1$ are nonincreasing in $\varepsilon$ and converge to a finite limit as $\varepsilon \to 0$. Numerically, “sweeping” the error budget down to zero identifies the lowest possible $\lambda$ at each target $\varepsilon$.
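A sketch of the Pareto sweep using iterative soft-thresholding (ISTA) on the penalized form $\tfrac12\|Dc - f\|^2 + \mu\|c\|_1$. ISTA stands in here for the convex solvers and homotopy tracking described above, and all parameters are illustrative.

```python
import numpy as np

BETA, L, K = 1.0, 2.0, 12
x = np.linspace(-BETA, BETA, 201)
D = np.sin(np.outer(x, np.arange(1, K + 1)) * np.pi / L)
f = x

step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz const of gradient
c = np.zeros(K)
pareto = []                                   # (residual, l1 norm) pairs
for mu in (1e-1, 1e-2, 1e-3, 1e-4):           # sweep mu down, warm-started
    for _ in range(10000):
        z = c - step * (D.T @ (D @ c - f))    # gradient step on 0.5||Dc - f||^2
        c = np.sign(z) * np.maximum(np.abs(z) - step * mu, 0.0)  # soft-threshold
    pareto.append((np.linalg.norm(D @ c - f), np.sum(np.abs(c))))
    print(f"mu={mu:.0e}  residual={pareto[-1][0]:.2e}  ||c||_1={pareto[-1][1]:.3f}")
```

Each recorded pair is one point on the (error, subnormalization) trade-off curve; decreasing the penalty $\mu$ moves along the front toward lower error at higher $\|c\|_1$.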
6. Implementation Procedures and Practical Implications
The Fourier-LCU methodology is summarized by the following stepwise procedure:
- Construct an analytic $2L$-periodic, odd extension of $f(x) = x$ beyond $[-\beta, \beta]$.
- Compute the exponentially convergent truncated sine series via least squares.
- Convert each $\sin(k\pi H/L)$ into LCUs of $e^{\pm ik\pi H/L}$ for $H \in \{H_1, H_2\}$; assemble $A$ via weighted sums.
- Realize the decomposition as a $\mathrm{PREP}$–$\mathrm{SELECT}$–$\mathrm{PREP}^{\dagger}$ block-encoding, with subnormalization scaling as $\lambda = O(\log\log(1/\varepsilon))$.
- Optionally, re-optimize coefficients with $\ell_1$-regularized least squares to minimize $\lambda$ at fixed $\varepsilon$, thereby mapping out the Pareto front.

All essential equations for the extension, the normal equations, the unitary weights, and the subnormalization, as well as the regularization trade-offs, are explicitly stated; optimized coefficients for various error budgets are tabulated in Table B of the corresponding source (Brearley et al., 25 Jan 2026).
A plausible implication is that non-unitary quantum algorithms leveraging Fourier-LCU can reach error targets at far lower resource cost than polynomial-scaling frameworks allow, and that coefficient regularization can yield tunable sparsity for practical block-encodings.