Quantum Orthogonal Separable PINNs
- Quantum Orthogonal Separable PINNs are a hybrid framework integrating separable PINNs with quantum-accelerated orthogonal layers for efficient high-dimensional PDE solutions.
- Dimension-wise separability reduces the number of collocation points from O(N^d) to O(dN), while quantum matrix multiplication accelerates each linear layer, together addressing the curse of dimensionality.
- Built-in spectral-normalized Gaussian Processes enable robust uncertainty quantification and regularize network dynamics in physics-informed learning.
Quantum Orthogonal Separable Physics-Informed Neural Networks (QO-SPINNs) represent a hybrid computational framework designed to enhance the efficiency and capability of neural network-based solvers for Partial Differential Equations (PDEs). QO-SPINNs integrate dimension-wise separability from Separable PINNs (SPINNs) with quantum-accelerated orthogonal layers, leveraging quantum matrix multiplication techniques for superior scaling in high-dimensional problems. Orthogonality induced by quantum circuits underpins robust uncertainty quantification via a spectral-normalized Gaussian Process, validated on forward and inverse PDE benchmarks (Zanotta et al., 16 Nov 2025).
1. Architectural Principles
QO-SPINNs build upon the separable structure of SPINNs to efficiently approximate solutions to $d$-dimensional PDEs. For a solution $u(x_1, \dots, x_d)$, QO-SPINNs factorize the neural representation into scalar-input subnetworks $f_1, \dots, f_d$ with $f_i : \mathbb{R} \to \mathbb{R}^r$, recombined through a rank-$r$ Canonical Polyadic (CP) expansion:

$$u(x_1, \dots, x_d) \approx \sum_{j=1}^{r} \prod_{i=1}^{d} f_i^{(j)}(x_i).$$

This factorization reduces the number of collocation points from $N^d$ (standard PINN) to $dN$, addressing the curse of dimensionality inherent to conventional PINNs. Each $f_i$ is instantiated as a Quantum Orthogonal Multilayer Perceptron (QO-MLP), whose weight matrices are realized as orthogonal transformations ($W^\top W = I$) via quantum circuits built from Hamming weight-preserving Real Beam Splitter (RBS) gates.
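To make the factorization concrete, here is a minimal JAX sketch of the rank-$r$ CP recombination; the subnetwork widths, parameter names, and `qo_spinn_forward` are illustrative, and the quantum orthogonal layers are abstracted behind an ordinary MLP for brevity:

```python
import jax
import jax.numpy as jnp

def subnet(params, x):
    """Scalar-input subnetwork f_i: maps x_i in R to r feature channels."""
    h = jnp.tanh(params["W1"] @ jnp.atleast_1d(x) + params["b1"])
    return params["W2"] @ h + params["b2"]                # shape (r,)

def qo_spinn_forward(all_params, coords):
    """Rank-r CP expansion: u(x) = sum_{j=1}^r prod_{i=1}^d f_i^(j)(x_i)."""
    feats = jnp.stack([subnet(p, c) for p, c in zip(all_params, coords)])
    return jnp.sum(jnp.prod(feats, axis=0))               # prod over dims, sum over rank
```

On a tensor-product grid, each subnetwork is evaluated once per 1D coordinate and the results are merged by outer products, which is what brings the collocation cost down from $N^d$ to $dN$.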
A typical QO-MLP layer performs

$$z_{\ell+1} = \sigma(W_\ell z_\ell), \qquad W_\ell^\top W_\ell = I,$$

where the product $W_\ell z_\ell$ is computed with a quantum algorithm scaling as $\mathcal{O}(n/\delta^2)$ for an $n$-dimensional layer and tomography precision $\delta$, a substantial improvement over the classical $\mathcal{O}(n^2)$ scaling of a dense matrix-vector product.
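Since the orthogonal matrix is a product of two-dimensional rotations, its classical emulation is straightforward. The following sketch (the sweep ordering and function names are assumptions; a faithful implementation would follow the pyramidal layout exactly) parameterizes $W$ by $n(n-1)/2$ angles so that $W^\top W = I$ holds by construction for any parameter values:

```python
import jax.numpy as jnp

def givens(n, i, theta):
    """Planar rotation on coordinates (i, i+1): the classical analogue of one RBS gate."""
    c, s = jnp.cos(theta), jnp.sin(theta)
    G = jnp.eye(n)
    G = G.at[i, i].set(c).at[i, i + 1].set(s)
    G = G.at[i + 1, i].set(-s).at[i + 1, i + 1].set(c)
    return G

def orthogonal_from_angles(thetas, n):
    """Compose n(n-1)/2 adjacent-pair rotations into W in SO(n).
    A sequential sweep order is used here for simplicity; the pyramidal
    circuit applies the same number of rotations with depth 2n - 3."""
    W = jnp.eye(n)
    k = 0
    for sweep in range(n - 1):
        for i in range(n - 1 - sweep):
            W = givens(n, i, thetas[k]) @ W
            k += 1
    return W

def qo_mlp_layer(thetas, z):
    """One QO-MLP layer: z -> sigma(W z), with W exactly orthogonal."""
    W = orthogonal_from_angles(thetas, z.shape[0])
    return jnp.tanh(W @ z)
```

Because every factor is orthogonal, gradient updates on the angles can never leave the orthogonal manifold, which is the property the training step in Section 3 relies on.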
2. Quantum Matrix Multiplication Subroutine
The quantum matrix multiplication at the core of QO-SPINNs employs a sequence of encoding, transformation, and tomography steps:
- Unary Encoding: Input vectors $x \in \mathbb{R}^n$ are normalized ($\|x\|_2 = 1$) and represented in the unary basis as quantum states:

$$|x\rangle = \sum_{i=1}^{n} x_i \, |e_i\rangle,$$

where $|e_i\rangle$ denotes the basis state with a single 1 in position $i$. The recursive RBS rotation angles are calculated as $\alpha_i = \arccos\!\big(x_i / \prod_{j<i} \sin\alpha_j\big)$, for $i = 1, \dots, n-1$ (see the sketch after this list).
- Pyramidal RBS Circuit: Any orthogonal $W \in \mathrm{SO}(n)$ is realized as a triangular network of $n(n-1)/2$ RBS gates (circuit depth $2n-3$), effecting the transformation:

$$|x\rangle \mapsto |Wx\rangle = \sum_{i=1}^{n} (Wx)_i \, |e_i\rangle.$$
- Unary Tomography: Measurement of output amplitudes (including sign) proceeds via ancilla-controlled RBS-based tomography, requiring $\mathcal{O}(n/\delta^2)$ shots to achieve accuracy $\delta$. The total quantum cost per layer is dominated by this step.
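A small sketch of the unary data loader, assuming the recursive angle formula above; the sign caveat is noted in the comments, since $\arccos$ alone fixes the last amplitude to be non-negative:

```python
import numpy as np

def loader_angles(x):
    """RBS angles for loading a unit-norm x into the unary basis, using
    x_i = cos(a_i) * prod_{j<i} sin(a_j)  and  x_n = prod_j sin(a_j).
    Assumes x[-1] >= 0; full loaders adjust the final angle by sign."""
    alphas = np.zeros(len(x) - 1)
    sin_prod = 1.0
    for i in range(len(x) - 1):
        # clip guards against round-off pushing the ratio outside [-1, 1]
        alphas[i] = np.arccos(np.clip(x[i] / sin_prod, -1.0, 1.0))
        sin_prod *= np.sin(alphas[i])
    return alphas

def amplitudes(alphas):
    """Amplitudes of the prepared unary state; should reproduce x."""
    amp, sin_prod = np.zeros(len(alphas) + 1), 1.0
    for i, a in enumerate(alphas):
        amp[i] = np.cos(a) * sin_prod
        sin_prod *= np.sin(a)
    amp[-1] = sin_prod
    return amp

x = np.array([0.5, 0.5, 0.5, 0.5])                        # unit norm
assert np.allclose(amplitudes(loader_angles(x)), x)
```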
The enforced orthogonality ($W^\top W = I$) is fundamental for both architectural stability and the subsequent uncertainty quantification methodology.
3. Application to Physics-Informed PDE Learning
QO-SPINNs are trained to minimize physics-informed loss functionals tailored to the target PDE. For a PDE $\mathcal{N}[u](x) = f(x)$ on a domain $\Omega$ with boundary operator $\mathcal{B}$ and boundary data $g$,

$$\mathcal{L}(\theta) = \frac{\lambda_r}{N_r} \sum_{k=1}^{N_r} \big\| \mathcal{N}[u_\theta](x_k) - f(x_k) \big\|^2 + \frac{\lambda_b}{N_b} \sum_{k=1}^{N_b} \big\| \mathcal{B}[u_\theta](x_k) - g(x_k) \big\|^2.$$
Derivatives are computed via forward-mode automatic differentiation (JVP) through each QO-MLP; backpropagation updates quantum circuit parameters with a classical cost of $\mathcal{O}(n^2)$ per layer, preserving orthogonality.
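As an illustration of the JVP-based derivative computation, the sketch below assembles a Burgers-type residual $u_t + u\,u_x - \nu u_{xx}$ purely with forward-mode `jax.jvp`; `u_fn`, the coordinate layout, and the value of $\nu$ are illustrative placeholders rather than the paper's exact setup:

```python
import jax
import jax.numpy as jnp

def burgers_residual(u_fn, t, x, nu=0.01):
    """PDE residual u_t + u u_x - nu u_xx via nested forward-mode AD."""
    u = lambda t_, x_: u_fn(jnp.array([t_, x_]))          # scalar-valued
    u_t = jax.jvp(lambda t_: u(t_, x), (t,), (1.0,))[1]
    du_dx = lambda x_: jax.jvp(lambda s: u(t, s), (x_,), (1.0,))[1]
    u_x = du_dx(x)
    u_xx = jax.jvp(du_dx, (x,), (1.0,))[1]                # forward-over-forward
    return u_t + u(t, x) * u_x - nu * u_xx
```

Forward mode matches the separable setting well: each subnetwork takes a scalar input, so a single JVP per axis recovers the needed partial derivatives without building full Jacobians.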
Empirical validation on canonical forward and inverse PDE problems includes:
- Advection–Diffusion (1D–3D): In both the 2D and 3D cases, QO-SPINN attains lower MSE than the corresponding SPINN baseline.
- Burgers’ Equation (1D): QO-SPINN attains MSE comparable to SPINN.
- Sine–Gordon Inverse Problem: Both QO-SPINN and SPINN recover estimates close to the true PDE coefficient.
4. Uncertainty Quantification via Spectral-Normalized Gaussian Processes
The inherent orthogonality of QO-MLP layers ($\|W\|_2 = 1$, since orthogonal maps are isometries) enables a direct adaptation of the spectral-normalized Gaussian Process (GP) approach for principled uncertainty quantification in SPINN architectures, eliminating the typical overhead of external spectral normalization procedures.
- Orthogonal ResNet Backbone: Each subnetwork $f_i$ can be structured as a ResNet with RBS-induced orthogonal layers, ensuring all blocks are bi-Lipschitz by construction. This property supports robustness and well-calibrated uncertainty estimation.
- Gaussian Process Output Layer: The fully connected output is replaced with a GP using an RBF kernel,

$$k(h, h') = \exp\!\left(-\frac{\|h - h'\|^2}{2\ell^2}\right),$$

implemented via random Fourier features for computational efficiency. The Bayesian linear model $g(h) = \Phi(h)^\top \beta$, $\beta \sim \mathcal{N}(0, I)$, yields predictive mean and variance in closed form after observing training data (a minimal sketch follows this list).
- Stacked Subnet Outputs: Outputs from the $d$ orthogonal subnetworks are concatenated ($h(x) = [f_1(x_1); \dots; f_d(x_d)]$) and passed to the spectral-normalized GP. Stacking preserves bi-Lipschitz constants, supporting end-to-end uncertainty quantification within the separable PINN paradigm.
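A minimal sketch of the GP output layer using random Fourier features; `num_feat`, the ridge term, and the closed-form posterior are standard Bayesian-linear-regression choices, not details taken from the paper:

```python
import jax
import jax.numpy as jnp

def make_rff(key, in_dim, num_feat=256, lengthscale=1.0):
    """Random Fourier features: Phi(h) @ Phi(h') ~= exp(-||h - h'||^2 / (2 l^2))."""
    kw, kb = jax.random.split(key)
    W = jax.random.normal(kw, (num_feat, in_dim)) / lengthscale
    b = jax.random.uniform(kb, (num_feat,), maxval=2 * jnp.pi)
    return lambda h: jnp.sqrt(2.0 / num_feat) * jnp.cos(W @ h + b)

def gp_posterior(Phi, y, ridge=1e-3):
    """Bayesian linear model y = Phi @ beta, beta ~ N(0, I): closed-form posterior."""
    A = Phi.T @ Phi + ridge * jnp.eye(Phi.shape[1])       # posterior precision
    beta_hat = jnp.linalg.solve(A, Phi.T @ y)
    return beta_hat, jnp.linalg.inv(A)

def gp_predict(phi_x, beta_hat, cov):
    """Predictive mean and variance at a single feature vector phi_x."""
    return phi_x @ beta_hat, phi_x @ cov @ phi_x
```

Because the upstream layers are bi-Lipschitz, distances in the concatenated feature space $h(x)$ track distances in input space, which is what makes the GP variance a meaningful out-of-distribution signal.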
5. Numerical Evaluation and Benchmark Metrics
Comparative benchmarks and error metrics demonstrate the advantages of QO-SPINN over classical SPINN and PINN approaches in both accuracy and efficiency.
| PDE Problem | Accuracy (QO-SPINN vs SPINN) | Collocation Points ($dN$ vs $N^d$) |
|---|---|---|
| 2D Advection–Diffusion | lower MSE for QO-SPINN | $2N$ vs $N^2$ |
| 3D Advection–Diffusion | lower MSE for QO-SPINN | $3N$ vs $N^3$ |
| 1D Burgers’ | similar MSE | $N$ vs $N$ |
Additional key findings:
- QO-SPINNs achieve target accuracy on standard PDEs with at least $25\times$ fewer parameters.
- Forward evaluation complexity per linear layer is $\mathcal{O}(n/\delta^2)$ (QO-SPINN, via quantum matrix multiplication) vs $\mathcal{O}(n^2)$ (SPINN, PINN).
- On the 1D Burgers’ equation, the Error-Aware Coefficient (EAC) for QO-SPINN UQ reaches $0.76$, indicating strong positive correlation between predicted uncertainty and true error. Monte Carlo dropout methods, by contrast, can yield negative correlation.
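The EAC is reported here without a formula; one plausible reading (an assumption, not the paper's definition) is a Pearson-style correlation between predicted uncertainty and realized error, which would be computed as:

```python
import jax.numpy as jnp

def error_aware_coefficient(pred_std, abs_err):
    """Hypothetical EAC proxy: correlation between predicted standard
    deviation and absolute error; +1 means uncertainty tracks error
    perfectly, negative values mean it is anti-correlated."""
    s = pred_std - pred_std.mean()
    e = abs_err - abs_err.mean()
    return (s @ e) / (jnp.linalg.norm(s) * jnp.linalg.norm(e))
```

Under this reading, the reported $0.76$ for QO-SPINN and the negative values for Monte Carlo dropout quantify how reliably each method's predicted uncertainty flags its actual mistakes.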
6. Significance and Research Implications
By combining separable neural architectures with quantum-accelerated orthogonal linear layers, QO-SPINNs provide a computationally efficient and theoretically principled approach to physics-constrained machine learning for PDEs. The unit spectral norm enforced by orthogonality serves both to regularize learning dynamics and to enable a built-in method for uncertainty quantification, tailored for separable architectures. Numerical simulations confirm consistent accuracy improvements and substantial reductions in computational requirements for training and inference, particularly in high-dimensional settings. This suggests that the QO-SPINN framework could address scalability issues in scientific machine learning and accelerate the adoption of neural PDE solvers in quantum computation settings (Zanotta et al., 16 Nov 2025).