
Quantum Machine Learning with Application to Progressive Supranuclear Palsy Network Classification (2407.06226v1)

Published 6 Jul 2024 in quant-ph and cs.LG

Abstract: Machine learning and quantum computing are increasingly being explored as computational approaches to problems that have so far resisted solution. Classical machine learning methods are ubiquitous in pattern recognition, with support vector machines (SVMs) being a prominent technique for network classification. However, such classification tasks become hard to solve when the input feature space grows large and the repeated evaluation of kernel functions becomes computationally prohibitive. Principal component analysis (PCA) substantially reduces the dimensionality of the feature space, thereby speeding up supervised learning, i.e., the training of a classifier. Further, applying quantum-based learning to the PCA-reduced input feature space may offer an exponential speedup with fewer parameters. The present learning model is evaluated on a real clinical application: the diagnosis of Progressive Supranuclear Palsy (PSP) disorder. The results suggest that quantum machine learning provides a noticeable advance and outperforms the classical frameworks. The optimized variational quantum classifier classifies the PSP dataset with 86% accuracy, compared with a conventional SVM. The second technique, a quantum kernel estimator, approximates the kernel function on the quantum machine and uses it to optimize a classical SVM. In particular, we demonstrate the successful application of the present model both on a quantum simulator and on real devices of the IBM Quantum platform.
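For orientation, the classical baseline described in the abstract, PCA for dimensionality reduction followed by an SVM classifier, can be sketched minimally with scikit-learn. The synthetic data, the number of retained components, and the SVM settings below are illustrative placeholders and are not taken from the paper.

```python
# Minimal sketch of a PCA + SVM pipeline; data and hyperparameters
# are illustrative assumptions, not the paper's configuration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrix (samples x network features) and binary
# labels (e.g., PSP vs. control); replace with the clinical dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))
y = rng.integers(0, 2, size=100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Standardize, project onto a few principal components, then fit a
# kernel SVM on the reduced feature space.
clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=4),        # assumed number of retained components
    SVC(kernel="rbf", C=1.0),
)
clf.fit(X_train, y_train)
print("classical SVM test accuracy:", clf.score(X_test, y_test))
```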

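The variational quantum classifier mentioned in the abstract could be sketched roughly as follows, assuming Qiskit with the qiskit-machine-learning and qiskit-algorithms packages (APIs vary across versions). The feature map, ansatz, optimizer budget, and data are assumptions for illustration; the paper's actual circuits and training setup may differ.

```python
# Hedged sketch of a variational quantum classifier (VQC) on
# PCA-reduced features; all settings are illustrative assumptions.
import numpy as np
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit_algorithms.optimizers import COBYLA
from qiskit_machine_learning.algorithms.classifiers import VQC

rng = np.random.default_rng(1)
n_features = 4                      # one qubit per PCA-reduced feature
X = rng.normal(size=(30, n_features))
y = rng.integers(0, 2, size=30)

vqc = VQC(
    feature_map=ZZFeatureMap(feature_dimension=n_features, reps=1),
    ansatz=RealAmplitudes(num_qubits=n_features, reps=2),
    optimizer=COBYLA(maxiter=50),
)
vqc.fit(X, y)                       # trains the ansatz parameters
print("VQC training accuracy:", vqc.score(X, y))
```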
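Similarly, the quantum kernel estimator idea, where the kernel matrix is estimated on a quantum backend and then handed to a classical SVM, can be sketched as below, again assuming qiskit-machine-learning. The ZZFeatureMap encoding, qubit count, and data are illustrative assumptions rather than the configuration reported in the paper.

```python
# Hedged sketch of a quantum kernel estimator feeding a classical SVM;
# feature map, qubit count, and data are illustrative assumptions.
import numpy as np
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_features = 4                      # PCA-reduced dimension = qubit count here
X_train = rng.normal(size=(20, n_features))
y_train = rng.integers(0, 2, size=20)
X_test = rng.normal(size=(8, n_features))

# Encode each PCA-reduced sample in a parameterized circuit; each kernel
# entry K(x, z) is estimated from the fidelity between the two encodings.
feature_map = ZZFeatureMap(feature_dimension=n_features, reps=2)
quantum_kernel = FidelityQuantumKernel(feature_map=feature_map)

# Precompute the Gram matrices and train a classical SVM on them.
K_train = quantum_kernel.evaluate(x_vec=X_train)
K_test = quantum_kernel.evaluate(x_vec=X_test, y_vec=X_train)

svc = SVC(kernel="precomputed")
svc.fit(K_train, y_train)
print("quantum-kernel SVM predictions:", svc.predict(K_test))
```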