
Quantum Orthogonal Separable PINNs

Updated 18 November 2025
  • Quantum Orthogonal Separable PINNs are a hybrid framework integrating separable PINNs with quantum-accelerated orthogonal layers for efficient high-dimensional PDE solutions.
  • The method leverages quantum matrix multiplication to reduce computational complexity from O(N^d) to O(dN), addressing the curse of dimensionality.
  • Built-in spectral-normalized Gaussian Processes enable robust uncertainty quantification and regularize network dynamics in physics-informed learning.

Quantum Orthogonal Separable Physics-Informed Neural Networks (QO-SPINNs) represent a hybrid computational framework designed to enhance the efficiency and capability of neural network-based solvers for Partial Differential Equations (PDEs). QO-SPINNs integrate dimension-wise separability from Separable PINNs (SPINNs) with quantum-accelerated orthogonal layers, leveraging quantum matrix multiplication techniques for superior scaling in high-dimensional problems. Orthogonality induced by quantum circuits underpins robust uncertainty quantification via a spectral-normalized Gaussian Process, validated on forward and inverse PDE benchmarks (Zanotta et al., 16 Nov 2025).

1. Architectural Principles

QO-SPINNs build upon the separable structure of SPINNs to efficiently approximate solutions to $d$-dimensional PDEs. For a solution $u(x_1,\ldots,x_d)$, QO-SPINNs factorize the neural representation into $d$ scalar-input subnetworks $\varphi_i:\mathbb{R}\rightarrow\mathbb{R}^r$, recombined through a rank-$r$ Canonical Polyadic (CP) expansion:

$$u(x_1,\ldots,x_d;\theta) = \sum_{k=1}^r \prod_{i=1}^d \varphi_{i,k}(x_i;\theta_i)$$

This factorization reduces the number of collocation points from $\mathcal O(N^d)$ (standard PINN) to $\mathcal O(dN)$, addressing the curse of dimensionality inherent to conventional PINNs. Each $\varphi_i$ is instantiated as a Quantum Orthogonal Multilayer Perceptron (QO-MLP), whose weight matrices $W\in\mathbb{R}^{m\times n}$ are realized as orthogonal transformations ($W\in\mathrm{SO}(n)$) via quantum circuits built from Hamming-weight-preserving Real Beam Splitter (RBS) gates.
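As a purely classical illustration of the CP recombination (not of the quantum layers), the sketch below assumes each per-dimension subnetwork has already been evaluated on its own 1D grid of $N$ points, yielding an $(N, r)$ feature array; the function name `separable_eval` is illustrative:

```python
import numpy as np

def separable_eval(features):
    """Recombine per-dimension feature maps via a rank-r CP expansion.

    features: list of d arrays, each of shape (N_i, r), holding
    phi_i(x_i) on N_i collocation points along axis i. Returns u on
    the full tensor-product grid while evaluating only d*N feature
    maps, never N^d network forward passes.
    """
    d = len(features)
    letters = "abcdefghij"           # supports up to d = 10 axes
    # e.g. for d = 3: "ka,kb,kc->abc" with each factor transposed to (r, N)
    subs = ",".join("k" + letters[i] for i in range(d))
    return np.einsum(subs + "->" + letters[:d], *(f.T for f in features))
```

The rank axis `k` is contracted last, so the intermediate tensors never exceed the output size times `r`.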

A typical QO-MLP layer performs

$$h^{(l+1)} = \sigma\big(W^{(l)}h^{(l)} + b^{(l)}\big)$$

where the product $W^{(l)}h^{(l)}$ is computed with a quantum algorithm scaling as $\mathcal O(d\log d/\epsilon^2)$, a substantial improvement over the classical $\mathcal O(d^2)$ scaling.
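A minimal classical stand-in for such a layer, assuming the orthogonal weight is sampled via a QR decomposition rather than realized by an RBS circuit (function names are illustrative):

```python
import numpy as np

def random_orthogonal(n, rng):
    """Sample W in SO(n) via QR decomposition -- a classical stand-in
    for the pyramidal RBS circuit that realizes W on quantum hardware."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    q *= np.sign(np.diag(r))      # fix column signs to make the factor unique
    if np.linalg.det(q) < 0:      # flip one column to land in SO(n)
        q[:, 0] = -q[:, 0]
    return q

def qo_mlp_layer(h, W, b):
    """One layer h^{(l+1)} = sigma(W h + b), with sigma = tanh."""
    return np.tanh(W @ h + b)
```

By construction $W W^\mathsf{T} = I$, so the layer's spectral norm is exactly 1, matching the orthogonality constraint the quantum circuit enforces.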

2. Quantum Matrix Multiplication Subroutine

The quantum matrix multiplication at the core of QO-SPINNs employs a sequence of encoding, transformation, and tomography steps:

  • Unary Encoding: Input vectors $h\in\mathbb{R}^d$ are normalized ($\|h\|_2=1$) and represented in the unary basis as quantum states:

$$|h\rangle = \sum_{j=1}^d h_j |e_j\rangle$$

The recursive RBS rotation angles are $\gamma_1=\arccos(h_1)$ and $\gamma_i=\arccos\!\left(h_i/\prod_{j<i} \sin\gamma_j\right)$ for $i=2,\ldots,d$.

  • Pyramidal RBS Circuit: Any $W\in\mathrm{SO}(d)$ is realized as a triangular network of RBS gates (circuit depth $\sim\mathcal O(d)$), effecting the transformation:

$$|Wh\rangle = \sum_{j=1}^d \left(\sum_{i=1}^d W_{j,i} h_i\right)|e_j\rangle$$

  • Unary Tomography: Measurement of the output amplitudes (including sign) proceeds via ancilla-controlled RBS-based tomography, requiring $\mathcal O(d\log d/\epsilon^2)$ shots to achieve accuracy $\epsilon$. The total quantum cost per layer is dominated by this step.

The enforced orthogonality ($\|W\|_2=1$) is fundamental for both architectural stability and the subsequent uncertainty quantification methodology.
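The recursive angle computation for unary encoding can be checked classically. The sketch below assumes a normalized input with nonzero prefix sine products, and verifies that the amplitudes reconstructed from the angles recover $h$:

```python
import numpy as np

def rbs_angles(h):
    """Recursive rotation angles for unary amplitude encoding:
    gamma_1 = arccos(h_1), gamma_i = arccos(h_i / prod_{j<i} sin gamma_j).
    Assumes ||h||_2 = 1 and nonzero prefix sine products."""
    gammas = np.empty(len(h))
    sin_prod = 1.0
    for i, hi in enumerate(h):
        # clip guards against tiny floating-point excursions past [-1, 1]
        gammas[i] = np.arccos(np.clip(hi / sin_prod, -1.0, 1.0))
        sin_prod *= np.sin(gammas[i])
    return gammas

def amplitudes(gammas):
    """Amplitudes of the encoded unary state:
    a_i = cos(gamma_i) * prod_{j<i} sin(gamma_j)."""
    prefixes = np.concatenate(([1.0], np.cumprod(np.sin(gammas[:-1]))))
    return np.cos(gammas) * prefixes
```

Note the last angle only takes values $0$ or $\pi$: it encodes the sign of the final component, since normalization already fixes its magnitude.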

3. Application to Physics-Informed PDE Learning

QO-SPINNs are trained to minimize physics-informed loss functionals tailored to the target PDE. For $\mathcal L[u](x) = f(x)$ on a domain $\Omega$ with boundary operator $B[\cdot]$ and boundary data $g$,

$$L(\theta) = \|\mathcal L[u(x;\theta)] - f(x)\|^2_{2,\Omega} + \lambda \|B[u(\cdot;\theta)] - g(\cdot)\|^2_{2,\partial\Omega}$$

Derivatives in $\mathcal L[u]$ are computed via forward-mode automatic differentiation (Jacobian-vector products, JVP) through each QO-MLP; backpropagation updates the quantum circuit parameters at a classical cost of $\mathcal O(d^2)$ per layer while preserving orthogonality.
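For concreteness, the loss structure can be sketched for a 1D Poisson problem $u'' = f$ with Dirichlet boundaries. Central finite differences stand in here for the forward-mode JVPs used by the paper, and all names are illustrative:

```python
import numpy as np

def pinn_loss(u, f, g, x_interior, x_boundary, lam=1.0, eps=1e-4):
    """Physics-informed loss for L[u] = u'' = f in 1D, with Dirichlet
    boundary operator B[u] = u and boundary data g.

    u is any callable surrogate (e.g. a trained separable network);
    central finite differences approximate u'' in place of autodiff.
    """
    # PDE residual on interior collocation points
    u_xx = (u(x_interior + eps) - 2.0 * u(x_interior)
            + u(x_interior - eps)) / eps**2
    pde_term = np.mean((u_xx - f(x_interior)) ** 2)
    # boundary mismatch, weighted by lambda
    bc_term = np.mean((u(x_boundary) - g(x_boundary)) ** 2)
    return pde_term + lam * bc_term
```

With the exact solution plugged in (e.g. $u=\sin$, $f=-\sin$), the loss collapses to finite-difference noise, while a wrong surrogate yields a large residual.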

Empirical validation on canonical forward and inverse PDE problems includes:

  • Advection–Diffusion (1D–3D): the 2D QO-SPINN achieves MSE $\approx 1.23\times10^{-2}$ (vs. $\approx 2.26\times10^{-1}$ for SPINN); the 3D QO-SPINN reaches MSE $\approx 3.35\times10^{-1}$ (vs. $\approx 1.07\times10^{0}$ for SPINN).
  • Burgers' Equation (1D, $\nu=0.05$): QO-SPINN MSE $\approx 6.33\times10^{-3}$, comparable to SPINN.
  • Sine–Gordon Inverse Problem: for the true $\beta=0.25$, QO-SPINN infers $\beta\approx0.252$ and SPINN $\beta\approx0.253$.

4. Uncertainty Quantification via Spectral-Normalized Gaussian Processes

The inherent orthogonality of QO-MLP layers ($\|W\|_2=1$) enables a direct adaptation of the spectral-normalized Gaussian Process (GP) approach for principled uncertainty quantification in SPINN architectures, eliminating the overhead of external spectral-normalization procedures.

  • Orthogonal ResNet Backbone: Each $\varphi_i$ can be structured as a ResNet with RBS-induced orthogonal layers, ensuring all blocks are bi-Lipschitz by construction. This property supports robustness and well-calibrated uncertainty estimation.
  • Gaussian Process Output Layer: The fully connected output layer is replaced with a GP using an RBF kernel,

$$k(h,h') = \exp(-\gamma\|h-h'\|^2)$$

implemented via random Fourier features for computational efficiency. The Bayesian linear model $y=\phi(h)^\mathsf{T}\beta$ with $\beta\sim\mathcal N(0,I)$ yields a closed-form predictive mean and variance after observing the training data.

  • Stacked Subnet Outputs: Outputs from the $d$ orthogonal subnetworks are concatenated ($h_\text{total}\in\mathbb{R}^{dr}$) and passed to the spectral-normalized GP. Stacking preserves the bi-Lipschitz constants, supporting end-to-end uncertainty quantification within the separable PINN paradigm.

5. Numerical Evaluation and Benchmark Metrics

Comparative benchmarks and error metrics demonstrate the advantages of QO-SPINN over classical SPINN and PINN approaches in both accuracy and efficiency.

| PDE Problem | QO-SPINN MSE | SPINN MSE | Collocation Complexity |
|---|---|---|---|
| 2D Advection–Diffusion | $1.23\times10^{-2}$ | $2.26\times10^{-1}$ | $\mathcal O(dN)$ |
| 3D Advection–Diffusion | $3.35\times10^{-1}$ | $1.07\times10^{0}$ | $\mathcal O(dN)$ |
| 1D Burgers' | $6.33\times10^{-3}$ | similar | $\mathcal O(dN)$ |

Additional key findings:

  • QO-SPINNs achieve target accuracy on standard PDEs with 25–50% fewer parameters.
  • Forward evaluation complexity is $\mathcal O(d\log d/\epsilon^2)$ for QO-SPINN vs. $\mathcal O(d^2)$ for SPINN and PINN.
  • On the 1D Burgers' equation, the Error-Aware Coefficient (EAC) for QO-SPINN uncertainty quantification reaches 0.76, indicating a strong positive correlation between predicted uncertainty and true error. Monte Carlo dropout methods, by contrast, can yield negative correlation.

6. Significance and Research Implications

By combining separable neural architectures with quantum-accelerated orthogonal linear layers, QO-SPINNs provide a computationally efficient and theoretically principled approach to physics-constrained machine learning for PDEs. The enforced spectral norm $\|W\|_2=1$ serves both to regularize learning dynamics and to enable a built-in method for uncertainty quantification tailored to separable architectures. Numerical simulations confirm consistent accuracy improvements and substantial reductions in computational requirements for training and inference, particularly in high-dimensional settings. This suggests that the QO-SPINN framework could address scalability issues in scientific machine learning and accelerate the adoption of neural PDE solvers in quantum computation settings (Zanotta et al., 16 Nov 2025).
