
QAHAN: A Quantum Annealing Hard Attention Network (2412.20930v1)

Published 30 Dec 2024 in quant-ph

Abstract: Hard Attention Mechanisms (HAMs) discretely filter essential information and significantly boost the performance of machine learning models on large datasets. Nevertheless, they are non-differentiable, which raises the risk of convergence to a local optimum. Quantum Annealing (QA) is expected to resolve this dilemma. We propose a Quantum Annealing Hard Attention Mechanism (QAHAM) that exploits the quantum tunneling effect to converge to the global optimum faster, without the need to compute gradients. Based on this theory, we construct a Quantum Annealing Hard Attention Network (QAHAN) on the D-Wave and PyTorch platforms for MNIST and CIFAR-10 multi-class classification. Experimental results indicate that QAHAN converges faster, exhibits smoother accuracy and loss curves, and demonstrates superior noise robustness compared to two traditional HAMs. We expect our scheme to accelerate the convergence of quantum algorithms and machine learning, while advancing the field of quantum machine vision.
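
The abstract does not spell out the optimization formulation, so the following is only a minimal sketch of the general idea: casting hard attention (selecting k salient patches) as a QUBO, the problem class a quantum annealer such as D-Wave accepts, and solving it here with classical simulated annealing as a stand-in. The QUBO form, the penalty construction, and all names and parameters below are illustrative assumptions, not the paper's QAHAN formulation.

```python
# Hypothetical sketch: hard attention as a QUBO over binary patch-selection
# variables, minimized by classical simulated annealing (a stand-in for the
# quantum annealer used in the paper). All details are illustrative.
import numpy as np

def build_qubo(saliency, k, penalty=4.0):
    """Build an upper-triangular QUBO matrix Q whose minimizer tends to
    select the k most salient patches.

    Objective: -sum_i s_i x_i + penalty * (sum_i x_i - k)^2, x_i in {0, 1}.
    Expanding the penalty gives linear terms penalty*(1 - 2k) on the diagonal
    and pairwise terms 2*penalty off the diagonal (the constant k^2 is dropped).
    """
    n = len(saliency)
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] = -saliency[i] + penalty * (1.0 - 2.0 * k)
        for j in range(i + 1, n):
            Q[i, j] = 2.0 * penalty
    return Q

def energy(Q, x):
    # x_i^2 == x_i for binary variables, so x @ Q @ x covers both
    # diagonal (linear) and upper-triangular (pairwise) terms.
    return x @ Q @ x

def simulated_annealing(Q, steps=5000, t0=2.0, t1=0.01, seed=0):
    """Single-spin-flip simulated annealing with a geometric cooling schedule."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = rng.integers(0, 2, n)
    e = energy(Q, x)
    for s in range(steps):
        t = t0 * (t1 / t0) ** (s / steps)   # temperature decays from t0 to t1
        i = rng.integers(n)
        x_new = x.copy()
        x_new[i] ^= 1                       # flip one selection bit
        e_new = energy(Q, x_new)
        if e_new < e or rng.random() < np.exp((e - e_new) / t):
            x, e = x_new, e_new
    return x

# Toy usage: choose k = 4 of 16 patches from a random "saliency" score.
saliency = np.random.default_rng(1).random(16)
Q = build_qubo(saliency, k=4)
mask = simulated_annealing(Q)
print("selected patches:", np.flatnonzero(mask), "count:", int(mask.sum()))
```

On real annealing hardware the same Q matrix would be handed to the sampler instead of the local search loop above; the point of the QUBO framing is that the selection mask is obtained without backpropagating through a discrete choice.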
