Robust quantum classifier with minimal overhead (2104.08148v1)

Published 16 Apr 2021 in quant-ph

Abstract: To witness quantum advantages in practical settings, substantial efforts are required not only at the hardware level but also in theoretical research to reduce the computational cost of a given protocol. Quantum computation has the potential to significantly enhance existing classical machine learning methods, and several quantum algorithms for binary classification based on the kernel method have been proposed. These algorithms rely on estimating an expectation value, which in turn requires an expensive quantum data encoding procedure to be repeated many times. In this work, we calculate explicitly the number of repetitions necessary for acquiring a fixed success probability and show that the Hadamard-test and the swap-test circuits achieve the optimal variance in terms of the quantum circuit parameters. The variance, and hence the number of repetitions, can be further reduced only via optimization over data-related parameters. We also show that the kernel-based binary classification can be performed with a single-qubit measurement regardless of the number and the dimension of the data. Finally, we show that for a number of relevant noise models the classification can be performed reliably without quantum error correction. Our findings are useful for designing quantum classification experiments under limited resources, which is the common challenge in the noisy intermediate-scale quantum era.
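The repetition cost the abstract refers to can be illustrated with a small numerical sketch (not taken from the paper): in a swap test, the ancilla qubit reads 0 with probability p = (1 + |⟨a|b⟩|²)/2, so the kernel value |⟨a|b⟩|² is recovered as 2p − 1, and the statistical error of the estimate shrinks as 1/√(shots). All function and variable names below are illustrative.

```python
import numpy as np

def swap_test_estimate(a, b, shots, seed=None):
    """Estimate the kernel |<a|b>|^2 from simulated swap-test outcomes.

    The ancilla of a swap test measures 0 with probability
    p0 = (1 + |<a|b>|^2) / 2, so the kernel is 2*p0 - 1.
    Here we sample the measurement statistics classically.
    """
    rng = np.random.default_rng(seed)
    overlap = abs(np.vdot(a, b)) ** 2
    p0 = (1.0 + overlap) / 2.0
    zeros = rng.binomial(shots, p0)   # number of '0' outcomes in `shots` runs
    return 2.0 * zeros / shots - 1.0

# Two normalized single-qubit states with a known overlap
a = np.array([1.0, 0.0])
theta = np.pi / 3
b = np.array([np.cos(theta / 2), np.sin(theta / 2)])

exact = abs(np.vdot(a, b)) ** 2      # = cos^2(theta/2) = 0.75
est = swap_test_estimate(a, b, shots=100_000, seed=0)
print(f"exact kernel: {exact:.4f}, estimated: {est:.4f}")
```

With 100,000 shots the binomial standard deviation of the estimator is roughly 2·√(p0(1−p0)/shots) ≈ 0.002 here, which is the kind of repetition-versus-accuracy trade-off the paper quantifies.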
