Restricting to the chip architecture maintains the quantum neural network accuracy (2212.14426v2)

Published 29 Dec 2022 in quant-ph, cs.AI, and cs.LG

Abstract: In the era of noisy intermediate-scale quantum devices, variational quantum algorithms (VQAs) stand as a prominent strategy for constructing quantum machine learning models. These models comprise a quantum and a classical component. The quantum part is characterized by a parametrization $U$, typically built by composing various quantum gates; the classical part is an optimizer that adjusts the parameters of $U$ to minimize a cost function $C$. Despite the extensive applications of VQAs, several critical questions persist, such as determining the optimal gate sequence, devising efficient parameter optimization strategies, selecting appropriate cost functions, and understanding how quantum chip architectures influence the final results. This article addresses the last question, showing that, in general, the cost function concentrates around a fixed average value as the parametrization used approaches a $2$-design. Consequently, when the parametrization closely approximates a $2$-design, the quantum neural network's outcome becomes largely independent of the specific parametrization. This insight makes it possible to let the native architecture of the quantum chip define the parametrization of the VQA. Doing so removes the need for additional SWAP gates, thereby reducing the depth of the VQA and the errors that accompany it.
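
For intuition about the concentration claim, here is a standard sketch from the $2$-design literature (the specific cost form and derivation are assumptions for illustration, not statements of this paper): take $C(U) = \mathrm{Tr}\big[O\,U\rho\,U^\dagger\big]$ for an observable $O$ and input state $\rho$ on a $d = 2^n$-dimensional Hilbert space. Averaging over Haar-random $U$ (which a $2$-design reproduces up to second moments) gives

$$\mathbb{E}_U\big[C(U)\big] = \frac{\mathrm{Tr}\,O}{d}, \qquad \mathrm{Var}_U\big[C(U)\big] = \frac{1}{d^2-1}\left(\mathrm{Tr}\,O^2 - \frac{(\mathrm{Tr}\,O)^2}{d}\right)\left(\mathrm{Tr}\,\rho^2 - \frac{1}{d}\right),$$

so for bounded observables the variance is exponentially small in the number of qubits $n$, and the cost value seen by the optimizer is nearly independent of which parametrization produced $U$.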

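To make the chip-architecture idea concrete, below is a minimal, self-contained NumPy sketch of a hardware-native ansatz. Everything specific in it is an illustrative assumption rather than the paper's construction: a 4-qubit device with linear coupling 0-1-2-3, RY rotation layers, CNOT entanglers, and a single-qubit cost observable $Z_0$. Because every CNOT acts on a coupled pair, the circuit maps onto the chip with no SWAP insertions, which is the depth saving the abstract describes.

```python
import numpy as np

# Hypothetical chip: 4 qubits on a line, edges 0-1, 1-2, 2-3.
n = 4                               # number of qubits
d = 2 ** n                          # Hilbert-space dimension
COUPLING = [(0, 1), (1, 2), (2, 3)] # native two-qubit connectivity

I2 = np.eye(2)

def ry(theta):
    """Single-qubit Y rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def kron_all(ops):
    """Tensor a list of 2x2 operators into the full n-qubit space."""
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def embed_1q(gate, qubit):
    """Lift a one-qubit gate to act on `qubit` of the n-qubit register."""
    return kron_all([gate if q == qubit else I2 for q in range(n)])

def cnot(control, target):
    """Full-space CNOT built from projectors on the control qubit."""
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    term0 = [P0 if q == control else I2 for q in range(n)]
    term1 = [P1 if q == control else (X if q == target else I2)
             for q in range(n)]
    return kron_all(term0) + kron_all(term1)

def ansatz_state(params, layers=2):
    """Hardware-native ansatz: RY layer + CNOTs along chip edges only."""
    psi = np.zeros(d)
    psi[0] = 1.0                    # start from |0...0>
    params = params.reshape(layers, n)
    for layer in range(layers):
        for q in range(n):
            psi = embed_1q(ry(params[layer, q]), q) @ psi
        for (c, t) in COUPLING:     # entangle only physically coupled pairs
            psi = cnot(c, t) @ psi
    return psi

def cost(params):
    """Cost C = <psi| Z_0 |psi>, a simple single-qubit observable."""
    Z = np.diag([1.0, -1.0])
    psi = ansatz_state(params)
    return float(psi.conj() @ embed_1q(Z, 0) @ psi)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=2 * n)
print("cost at random parameters:", cost(theta))
```

A classical optimizer (e.g., gradient descent on `cost`) would complete the VQA loop; on hardware, the same circuit respects the coupling map by construction, so transpilation adds no routing overhead.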
Citations (3)
