Dequantizing quantum machine learning models using tensor networks (2307.06937v2)

Published 13 Jul 2023 in quant-ph

Abstract: Ascertaining whether a classical model can efficiently replace a given quantum model -- dequantization -- is crucial in assessing the true potential of quantum algorithms. In this work, we introduce the dequantizability of the function class of variational quantum-machine-learning (VQML) models by employing the tensor-network formalism, showing that every VQML model belongs to a subclass of matrix product state (MPS) models characterized by a constrained coefficient MPS and tensor-product feature maps. From this formalism, we identify the conditions under which a VQML model's function class is or is not dequantizable. Furthermore, we introduce an efficient quantum-kernel-induced classical kernel that is as expressive as any given quantum kernel, hinting at a possible way to dequantize quantum kernel methods. This work presents a thorough analysis of VQML models and demonstrates the versatility of our tensor-network formalism in properly distinguishing VQML models according to their genuine quantum characteristics, thereby unifying classical and quantum machine-learning models within a single framework.
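For intuition, here is a minimal numerical sketch of the model family the abstract places VQML models inside: a function f(x) = <W, Phi(x)>, where Phi(x) is a tensor product of per-site feature vectors and the coefficient tensor W is stored as a matrix product state. This is not the paper's exact construction; the function and variable names and the choice of local feature map are illustrative assumptions.

```python
import numpy as np

# Sketch (assumed names, not the paper's code): an MPS model
# f(x) = <W, Phi(x)>, with Phi(x) = phi(x_1) (x) ... (x) phi(x_n)
# a tensor-product feature map and W a matrix product state.

def local_feature(x):
    """Per-site feature map phi: R -> R^2 (a common qubit-style encoding)."""
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def random_mps(n_sites, phys_dim=2, bond_dim=4, rng=None):
    """Random coefficient MPS: a list of (left_bond, phys, right_bond) cores."""
    if rng is None:
        rng = np.random.default_rng(0)
    dims = [1] + [bond_dim] * (n_sites - 1) + [1]
    return [rng.standard_normal((dims[i], phys_dim, dims[i + 1]))
            for i in range(n_sites)]

def mps_model(cores, x):
    """Contract the coefficient MPS with the tensor-product feature map of x."""
    env = np.ones((1,))                      # trivial left boundary
    for core, xi in zip(cores, x):
        # Project each core onto the local feature vector, then absorb it.
        site = np.einsum('p,lpr->lr', local_feature(xi), core)
        env = env @ site
    return env.item()                        # trivial right boundary

cores = random_mps(n_sites=6)
x = np.random.default_rng(1).uniform(size=6)
print(mps_model(cores, x))
```

The contraction cost is polynomial in the number of sites and the bond dimension, which is why a coefficient MPS of bounded bond dimension yields a classically efficient, i.e. dequantized, evaluation of the model.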

Citations (4)
