An In-Depth Analysis of the Expressivity of Embedding Quantum Kernels
The paper "On the expressivity of embedding quantum kernels" undertakes a foundational study in quantum machine learning (QML), focusing on the capacity and limitations of embedding quantum kernels (EQKs). Its central question is whether EQKs cover the entire spectrum of quantum kernel functions once efficiency constraints are imposed.
Overview
Embedding quantum kernels are a class of kernel functions derived from quantum feature maps: classical data is mapped onto the quantum state space, and the kernel is defined via the Hilbert-Schmidt inner product of the resulting states. This formulation is motivated by the capacity of quantum states to furnish a high-dimensional feature space amenable to kernel methods. Theorem 1 of the paper establishes the universality of EQKs: any kernel function can be approximated as an EQK, albeit without constraints on computational resources.
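To make the definition concrete, here is a minimal numpy sketch of an EQK. The feature map chosen (a hypothetical single-qubit angle encoding, not one prescribed by the paper) is purely illustrative; the point is that the kernel value is the fidelity, i.e. the squared inner product, of the two feature states.

```python
import numpy as np

def feature_state(x: float) -> np.ndarray:
    """Hypothetical feature map: single-qubit angle encoding |phi(x)> = RY(x)|0>."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def eqk(x: float, y: float) -> float:
    """Embedding quantum kernel: fidelity |<phi(x)|phi(y)>|^2 of the feature states."""
    return abs(np.vdot(feature_state(x), feature_state(y))) ** 2

# A normalized state has unit overlap with itself, so k(x, x) = 1.
print(eqk(0.3, 0.3))
```

For this particular encoding the overlap works out to cos²((x − y)/2), so the resulting kernel happens to be shift-invariant, which previews the kernel class studied next.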
Shift-Invariant Kernels and Random Fourier Features
The exploration then turns to whether efficient EQK approximations can be achieved universally within specific kernel classes. The paper narrows its scope to shift-invariant kernels, commonly used in classical machine learning, by leveraging the Random Fourier Features (RFF) methodology. Bochner's theorem, the pivot of the RFF method, states that a continuous shift-invariant function is positive semi-definite (PSD) if and only if it can be expressed as the Fourier transform of a non-negative measure.
Corollary 1 shows that shift-invariant kernels with sufficient smoothness (specifically, bounded second derivatives) can be approximated efficiently by EQKs, with the dimension of the feature space scaling polynomially in the relevant parameters. This yields a constructive path to efficient EQKs whenever sampling from the inverse Fourier transform of the kernel is computationally feasible.
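The classical RFF construction underlying this result can be sketched in a few lines. For the Gaussian kernel, Bochner's theorem says the associated Fourier measure is itself Gaussian, so sampling frequencies from a normal distribution yields a randomized low-dimensional feature map whose inner products approximate the kernel. The feature count and bandwidth below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X: np.ndarray, n_features: int = 20000, sigma: float = 1.0) -> np.ndarray:
    """Random Fourier features for the Gaussian kernel exp(-||x-y||^2 / (2 sigma^2)).
    By Bochner's theorem its Fourier measure is Gaussian, so we sample frequencies
    w ~ N(0, sigma^-2 I) and phases b ~ U[0, 2*pi)."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Inner products of the random features approximate the exact kernel value,
# with error shrinking as n_features grows.
X = rng.normal(size=(2, 3))
Z = rff_features(X)
approx = Z[0] @ Z[1]
exact = np.exp(-np.sum((X[0] - X[1]) ** 2) / 2.0)
print(abs(approx - exact))
```

The paper's contribution is to show that feature maps of this kind can be encoded into quantum states with only polynomial overhead, turning the randomized classical approximation into an efficient EQK.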
Composition Kernels and Beyond
The discussion then broadens to composition kernels, which generalize shift-invariant kernels by incorporating pre-processing functions into their formulation. Proposition 1 establishes that these, too, can be efficiently approximated as EQKs, assuming polynomial scaling of certain parameters. The construction encompasses the projected quantum kernel, a recent development that connects the classical-shadow formalism with kernel methods in QML.
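The structure of a composition kernel is simple to illustrate: a shift-invariant kernel is evaluated on pre-processed inputs. The specific pre-processing map below (a tanh nonlinearity) is a hypothetical stand-in; the paper's efficiency result applies whenever the pre-processing and the base kernel satisfy its scaling assumptions.

```python
import numpy as np

def f(x: np.ndarray) -> np.ndarray:
    """Hypothetical classical pre-processing map (here: an elementwise nonlinearity)."""
    return np.tanh(x)

def shift_invariant_kernel(u: np.ndarray, v: np.ndarray) -> float:
    """A Gaussian base kernel, which depends only on the difference u - v."""
    return float(np.exp(-0.5 * np.sum((u - v) ** 2)))

def composition_kernel(x: np.ndarray, y: np.ndarray) -> float:
    """Composition kernel: the shift-invariant kernel applied after pre-processing."""
    return shift_invariant_kernel(f(x), f(y))

x = np.array([0.2, -1.0])
y = np.array([1.0, 0.5])
# Still symmetric and maximal on the diagonal, but no longer a function of x - y alone.
print(composition_kernel(x, x), composition_kernel(x, y))
```

The projected quantum kernel fits this template with the pre-processing role played by reduced density matrices estimated via classical shadows.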
Implications and Speculation
The paper's results have noteworthy implications for the future of quantum machine learning. By confirming that two major classes of kernel functions can be realized efficiently as EQKs, the authors make a case for the expressive robustness of EQKs. These insights suggest that further exploration of novel quantum kernels may still yield useful models whose structural properties are advantageous for specific tasks.
Conclusions and Open Problems
While largely establishing the universality of EQKs, the work also initiates a search for kernel functions that defy efficient EQK realization. Promising directions include investigating non-shift-invariant and indefinite kernels through the lens of HQPs' spectral properties, as well as variational principles in QML suggested by connections to parametrized quantum circuits.
Ultimately, the paper consolidates a theoretical backbone for embedding quantum kernels while leaving ample unexplored territory in quantum kernel methods. The expressivity of EQKs mirrors the richness of quantum state space itself, suggesting that future advances in QML may well rest on leveraging these structures effectively.