Quantum Kernel Machines
Last updated: June 11, 2025
Quantum Gaussian Kernels in Quantum Support Vector Machines (citing “Gaussian Kernel in Quantum Learning” (Bishwas et al., 2017))
1. From the classical RBF to a quantum feature map
The classical Gaussian (a.k.a. radial-basis-function, RBF) kernel
\[
K_{\mathrm{RBF}}(\mathbf{x}_i,\mathbf{x}_j)=\exp\!\Bigl[-\|\mathbf{x}_i-\mathbf{x}_j\|^{2}/(2\sigma^{2})\Bigr] \tag{1}
\]
can be rewritten, via its Maclaurin expansion, as an infinite-degree polynomial kernel
\[
K_{\mathrm{RBF}}(\mathbf{x}_i,\mathbf{x}_j)=\sum_{l=0}^{\infty}\frac{\langle\mathbf{x}_i,\mathbf{x}_j\rangle^{\,l}}{l!}=e^{\langle\mathbf{x}_i,\mathbf{x}_j\rangle}. \tag{2}
\]
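To make (2) concrete, here is a minimal numpy check (illustrative, not from the paper) that a low-degree truncation of the series already reproduces \(e^{\langle\mathbf{x}_i,\mathbf{x}_j\rangle}\) for unit-norm data:

```python
# Numerical check of Eq. (2): the Maclaurin series of exp applied to the
# inner product converges factorially fast for unit-norm vectors.
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
x, y = rng.normal(size=4), rng.normal(size=4)
x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)  # unit-norm data

ip = float(x @ y)                                     # |ip| <= 1
series = sum(ip**l / factorial(l) for l in range(8))  # degree-7 truncation
print(series, np.exp(ip))                             # agree to ~1e-7
```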
“Gaussian Kernel in Quantum Learning” shows how to replicate (2) on a quantum computer:
- State preparation – encode each data vector \(\mathbf{x}_k\in\mathbb{R}^{N}\) in amplitude form \(|X_k\rangle\propto\sum_{p}x_{k,p}|p\rangle\).
- Inner-product estimation – obtain \(\langle X_i|X_j\rangle\) with a swap test.
- Exponential build-up – approximate \(e^{\langle X_i|X_j\rangle}\) by truncating the Taylor series at degree \(d\), so that only \(O(d)\) controlled inner-product evaluations are required (sketched below).
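The following numpy sketch simulates the swap-test statistics behind step 2. It is a classical stand-in under the usual assumption of real, non-negative amplitudes (so the sign of the overlap is known); `amplitude_encode` and `swap_test_overlap` are illustrative helper names, not API from the paper.

```python
# Statevector-level simulation of the swap test: the ancilla measures 0
# with probability P(0) = (1 + |<Xi|Xj>|^2) / 2, from which the overlap
# magnitude is recovered.
import numpy as np

def amplitude_encode(x):
    """Normalise a classical vector into amplitude-encoded form."""
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

def swap_test_overlap(xi, xj, shots=100_000, seed=0):
    """Estimate |<Xi|Xj>| from simulated swap-test measurement shots."""
    a, b = amplitude_encode(xi), amplitude_encode(xj)
    p0 = 0.5 * (1.0 + float(a @ b) ** 2)      # ideal ancilla-0 probability
    rng = np.random.default_rng(seed)
    p0_hat = rng.binomial(shots, p0) / shots  # finite-shot estimate
    return np.sqrt(max(2.0 * p0_hat - 1.0, 0.0))

xi, xj = [1.0, 2.0, 3.0, 4.0], [2.0, 1.0, 0.5, 3.0]
print(swap_test_overlap(xi, xj))                         # estimate
print(abs(amplitude_encode(xi) @ amplitude_encode(xj)))  # exact overlap
```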
With the obvious identification \(\langle\mathbf{x}_i,\mathbf{x}_j\rangle\mapsto\langle X_i|X_j\rangle\), the resulting quantum Gaussian kernel
\[
K^{q}_{\mathrm{GK}}\bigl(|X_i\rangle,|X_j\rangle\bigr)=\exp\!\bigl[\langle X_i|X_j\rangle\bigr] \tag{3}
\]
or, after rescaling the input width,
\[
K^{q}_{\mathrm{GK}}=\exp\!\Bigl[-\|X_i-X_j\|^{2}/(2\sigma^{2})\Bigr], \tag{4}
\]
is mathematically identical to (1), yet computable with quantum resources (Bishwas et al., 2017).
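Combining the two pieces gives a toy version of (3): estimate the overlap, then exponentiate via the degree-\(d\) truncation. This continues the running example and reuses `swap_test_overlap` from the previous snippet:

```python
# Toy realisation of Eq. (3): truncated Taylor series applied to the
# swap-test overlap estimate (requires swap_test_overlap defined above).
from math import factorial

def quantum_gaussian_kernel(xi, xj, d=7, shots=100_000):
    ip = swap_test_overlap(xi, xj, shots=shots)             # <Xi|Xj> estimate
    return sum(ip**l / factorial(l) for l in range(d + 1))  # ~ exp(ip)

print(quantum_gaussian_kernel([1, 2, 3, 4], [2, 1, 0.5, 3]))
```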
2. Runtime complexity
If all components of every \(\mathbf{x}_k\) are available in quantum random-access memory (QRAM), state loading scales as
\[
T_{\text{prep}}=O(\log N) \qquad\text{per vector.}
\]
Let \(d\) be the truncation order chosen so that the Taylor tail is at most \(\varepsilon\). Using swap-test-based inner-product evaluation, the total gate complexity per kernel entry becomes
\[
T_{\text{quantum}}=O\!\bigl(\varepsilon^{-1} d \log N\bigr), \tag{5}
\]
where the \(\varepsilon^{-1}\) factor comes from amplitude-estimation precision [(Bishwas et al., 2017), Sec. 3].
Classically, evaluating (2) to the same truncation order requires
\[
T_{\text{classical}}=O(dN) \tag{6}
\]
per entry. Comparing (5) and (6),
\[
\boxed{\;T_{\text{classical}}/T_{\text{quantum}}=\Theta\!\bigl(\varepsilon N/\log N\bigr)\;},
\]
i.e. an exponential speed-up in the data dimension \(N\), provided QRAM is available.
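A back-of-the-envelope comparison (constants ignored, so purely illustrative) shows how quickly this ratio grows with \(N\):

```python
# Plug sample values into Eqs. (5) and (6) to see the classical/quantum
# cost ratio grow with the data dimension N (constants ignored).
import numpy as np

eps, d = 0.01, 7
for N in [10**3, 10**6, 10**9]:
    t_classical = d * N                       # Eq. (6)
    t_quantum = (1 / eps) * d * np.log2(N)    # Eq. (5)
    print(f"N={N:>10}: speed-up ~ {t_classical / t_quantum:,.0f}x")
```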
3. Assembling the kernel matrix
For \(M\) training examples the kernel Gram matrix has \(M^{2}\) entries.
| Step | Classical cost | Quantum cost with QRAM |
|---|---|---|
| Vector loading | \(O(MN)\) | \(O(M\log N)\) |
| One kernel entry | \(O(dN)\) | \(O(\varepsilon^{-1}d\log N)\) |
| Full Gram matrix | \(O(M^{2}dN)\) | \(O(M^{2}\varepsilon^{-1}d\log N)\) |
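As a sketch of the assembly step, the loop below fills the Gram matrix with the toy kernel from Section 1, exploiting symmetry so only \(M(M+1)/2\) entries are estimated (an illustration continuing the running example, not the paper's code):

```python
# Assemble the M x M Gram matrix from pairwise quantum-kernel estimates
# (requires quantum_gaussian_kernel from the earlier snippet).
import numpy as np

def gram_matrix(X, d=7, shots=20_000):
    M = len(X)
    K = np.empty((M, M))
    for i in range(M):
        for j in range(i, M):          # symmetric: estimate upper triangle
            K[i, j] = K[j, i] = quantum_gaussian_kernel(X[i], X[j], d, shots)
    return K

X = np.random.default_rng(1).normal(size=(5, 8))   # M=5 samples, N=8
print(gram_matrix(X))
```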
The cubic-in-\(M\) term of SVM training (\(O(M^{3})\)) is the same in both worlds, but the bottleneck of \(M^{2}\) classical kernel evaluations is replaced by quantum ones, giving an exponential reduction in the dependence on the feature dimension \(N\).
4. Role of QRAM
QRAM supplies the coherent state
\[
\sum_{p}x_{k,p}|p\rangle \quad\text{in}\;O(\log N)\;\text{time},
\]
enabling (5). Without QRAM, one must load amplitudes sequentially, forfeiting the speed-up. Hence QRAM (or another fast state-preparation scheme) is the critical hardware assumption in (Bishwas et al., 2017).
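To make the assumption concrete, this is the amplitude vector QRAM is expected to serve up coherently, built here classically for an \(N=8\) example (3 address qubits):

```python
# The state QRAM must deliver: amplitudes x_{k,p} on ceil(log2 N) address
# qubits. Built classically here only to show what the encoding contains.
import numpy as np

x_k = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])  # N = 8
amps = x_k / np.linalg.norm(x_k)                # amplitudes of |X_k>
for p, a in enumerate(amps):
    print(f"|{p:03b}> : {a:+.4f}")              # basis state |p>, amplitude
print("norm check:", np.sum(amps**2))           # should be ~1.0
```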
5. Implications for quantum SVMs
Replacing the classical kernel in a least-squares SVM with (3) yields a quantum LS-SVM whose total runtime is
\[
O\!\bigl(M^{3}+M^{2}\varepsilon^{-1}d\log N\bigr) \tag{7}
\]
versus \(O\!\bigl(M^{3}+M^{2}dN\bigr)\) classically [(Bishwas et al., 2017), Sec. 4]. As \(N\) grows, the kernel-evaluation term in (7) scales only poly-logarithmically in \(N\), providing an asymptotic advantage for high-dimensional data.
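For reference, the \(O(M^{3})\) training step in (7) is just a dense linear solve. Below is a minimal LS-SVM fit in the standard dual formulation, using an exact RBF Gram matrix as a stand-in for the quantum-estimated one (this is the textbook least-squares SVM, not code from the paper):

```python
# Minimal least-squares SVM: solve the (M+1)x(M+1) dual system
#   [ 0   1^T         ] [ b     ]   [ 0 ]
#   [ 1   K + I/gamma ] [ alpha ] = [ y ]
# The np.linalg.solve call is the O(M^3) term in Eq. (7).
import numpy as np

def lssvm_fit(K, y, gamma=10.0):
    M = len(y)
    A = np.zeros((M + 1, M + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(M) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                     # bias b, dual weights alpha

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 4))
y = np.sign(X[:, 0])                           # toy +/-1 labels
K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 2)  # exact RBF stand-in
b, alpha = lssvm_fit(K, y)
print("training accuracy:", (np.sign(K @ alpha + b) == y).mean())
```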
6. Practical considerations
- Truncation order \(d\). Because the Maclaurin coefficients decrease factorially, a modest truncation order already suffices for typical inner-product magnitudes [(Bishwas et al., 2017), Fig. 3]; see the sketch after this list.
- Error tolerance. The \(\varepsilon^{-1}\) amplitude-estimation overhead in (5) applies uniformly to every kernel entry; in practice one balances \(\varepsilon\) against shot noise and SVM regularisation.
- Non-parametric expressivity. The Gaussian kernel corresponds to an infinite-dimensional feature space; its quantum realisation therefore inherits the strong universal-approximation properties prized in classical SVMs while scaling exponentially better in \(N\).
- Hardware roadmap. The algorithm requires QRAM of size \(O(MN)\), swap-test circuits of depth \(\mathrm{poly}(\log N)\), and \(O(\varepsilon^{-1})\) coherent repetitions. These ingredients make the quantum Gaussian kernel a realistic target for early fault-tolerant machines.
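The factorial decay behind the first bullet is easy to see numerically: for \(|\langle X_i|X_j\rangle|\le 1\) the tail after degree \(d\) is bounded by \(e/(d+1)!\), so single-digit truncation orders already reach small \(\varepsilon\) (a quick check, consistent with the paper's observation):

```python
# Bound on the Maclaurin tail sum_{l>d} ip^l / l! for |ip| <= 1:
# tail <= e / (d+1)!, which collapses factorially fast.
from math import factorial, e

for d in range(2, 11):
    print(f"d = {d:2d}: tail bound <= {e / factorial(d + 1):.2e}")
```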
7. Conclusion
The work of Bishwas et al. (2017) provides the first end-to-end blueprint for quantum-accelerated Gaussian kernels, showing:
- a faithful reconstruction of the ubiquitous RBF kernel inside a quantum computer,
- exponential improvement in data-dimension scaling when QRAM is available,
- seamless integration into LS-SVM training.
This establishes Gaussian-kernel quantum SVMs as a promising pathway to demonstrable quantum advantage on high-dimensional, nonlinear learning problems once scalable QRAM and low-depth swap-test primitives become experimentally accessible.