
Better than classical? The subtle art of benchmarking quantum machine learning models (2403.07059v2)

Published 11 Mar 2024 in quant-ph and cs.LG

Abstract: Benchmarking models via classical simulations is one of the main ways to judge ideas in quantum machine learning before noise-free hardware is available. However, the huge impact of the experimental design on the results, the small scales within reach today, as well as narratives influenced by the commercialisation of quantum technologies make it difficult to gain robust insights. To facilitate better decision-making we develop an open-source package based on the PennyLane software framework and use it to conduct a large-scale study that systematically tests 12 popular quantum machine learning models on 6 binary classification tasks used to create 160 individual datasets. We find that overall, out-of-the-box classical machine learning models outperform the quantum classifiers. Moreover, removing entanglement from a quantum model often results in as good or better performance, suggesting that "quantumness" may not be the crucial ingredient for the small learning tasks considered here. Our benchmarks also unlock investigations beyond simplistic leaderboard comparisons, and we identify five important questions for quantum model design that follow from our results.


Summary

  • The paper systematically compares 12 quantum machine learning models with classical classifiers across various binary classification tasks.
  • The paper employs extensive hyperparameter tuning to expose significant performance variance across configurations and the impact of trainable data encodings.
  • The paper finds that classical models frequently outperform quantum models on small-scale datasets, questioning the current advantage of QML.

Better than classical? Benchmarking Quantum Machine Learning Models

In the rapidly evolving field of quantum machine learning (QML), it's essential to rigorously evaluate the performance of quantum models against classical counterparts to determine the true potential and limitations of quantum computing for machine learning tasks. Our large-scale benchmark paper, drawing on the open-source PennyLane framework, systematically compares a range of popular QML models against classical machine learning models across various binary classification tasks.

Model and Data Selection

Our investigation focuses on 12 quantum machine learning models, categorized into quantum neural networks (QNNs), quantum kernel methods, and quantum convolutional neural networks (QCNNs), and compares their performance with that of prototypical classical models such as Support Vector Machines, standard Neural Networks, and Convolutional Neural Networks. These models were selected to represent influential ideas in QML that can be implemented on standard classical simulators.

The paper employs an array of benchmark tasks created from both synthesised and real-world datasets (specifically a downsized version of the MNIST dataset), aiming to cover a diverse set of structural properties and learning difficulties. These tasks include classifying linearly separable data, differentiating between images of bars and stripes, and classifying inputs based on whether they fall within a certain distance from a set of hyperplanes, among others. The primary goal behind these diverse benchmarks is to test the models under a wide variety of conditions.
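To make the "linearly separable" task concrete, here is a minimal sketch of how such a dataset could be generated. The function name, the `margin` parameter, and the sampling scheme are illustrative assumptions, not the paper's actual data generator:

```python
import random

def make_linearly_separable(n_samples, n_features, margin=0.1, seed=0):
    """Toy generator for a linearly separable binary task: draw a random
    normal vector w, label each point by sign(w . x), and keep only points
    at least `margin` away from the separating hyperplane."""
    rng = random.Random(seed)
    w = [rng.gauss(0.0, 1.0) for _ in range(n_features)]
    X, y = [], []
    while len(X) < n_samples:
        x = [rng.uniform(-1.0, 1.0) for _ in range(n_features)]
        s = sum(wi * xi for wi, xi in zip(w, x))
        if abs(s) >= margin:  # discard points too close to the boundary
            X.append(x)
            y.append(1 if s > 0 else -1)
    return X, y

X, y = make_linearly_separable(100, 4)
```

Because every kept point sits a fixed margin from a single hyperplane, a linear classifier can solve this task exactly, which is what makes it a useful baseline difficulty.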

Hyperparameter Tuning and Benchmarking Procedure

Extensive hyperparameter searches were conducted to ensure that each model's performance is accurately reflected, avoiding the pitfalls of under-tuning or over-tuning that could skew the results. The paper highlighted the vast performance variance across different hyperparameter configurations, emphasizing the critical role of careful hyperparameter selection.
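An exhaustive grid search of the kind described can be sketched as follows. The grid values and the toy scoring function are hypothetical stand-ins for a real train-and-validate loop; the point is only the mechanism of scoring every configuration and reporting the best:

```python
import itertools

# Hypothetical hyperparameter grid; a real study would train each model
# on every configuration and score it on held-out validation data.
grid = {
    "learning_rate": [0.001, 0.01, 0.1],
    "n_layers": [1, 2, 4],
    "batch_size": [16, 32],
}

def validation_accuracy(cfg):
    """Placeholder for 'train with cfg, return validation accuracy';
    here a deterministic toy score peaked at lr=0.01, n_layers=2."""
    return 1.0 - abs(cfg["learning_rate"] - 0.01) - 0.01 * abs(cfg["n_layers"] - 2)

def grid_search(grid, score_fn):
    keys = list(grid)
    best_cfg, best_score = None, float("-inf")
    for values in itertools.product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        s = score_fn(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

best_cfg, best_score = grid_search(grid, validation_accuracy)
```

Reporting only `best_score` for each model, as benchmark leaderboards do, is exactly why the search must be equally thorough for every model: an under-tuned competitor loses for reasons that have nothing to do with the model class.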

Key Findings

Our comprehensive benchmarks indicate that, in most cases, simple out-of-the-box classical models outperformed quantum models on the small-scale datasets tested. This finding remains consistent across the range of tasks considered, suggesting that, at least for these datasets, the unique capabilities of quantum computing have yet to manifest in a clear advantage for QML models.

One notable aspect of the benchmarks is the performance of models on linearly separable datasets, where quantum models struggled significantly. Surprisingly, removing entanglement from quantum models often did not degrade performance, raising questions about the role of quantumness in these particular machine learning tasks.

Beyond the performance rankings, the benchmarks unveiled intriguing insights into the design of quantum models. For instance, the paper dissected the factors contributing to the relative success of models employing data reuploading techniques, highlighting the importance of trainable data encodings and frequency spectrum rescaling.
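The effect of frequency rescaling can be illustrated with a minimal single-qubit data-reuploading simulation in plain Python (the weights and the `scale` parameter are illustrative; the paper's actual models are built in PennyLane). Such a model's output is a Fourier series in the input, and rescaling the encoding gates multiplies every frequency, shrinking the model's period:

```python
import cmath
import math

def ry(theta):
    """Trainable single-qubit Y rotation as a 2x2 matrix."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def rz(phi):
    """Data-encoding Z rotation as a 2x2 matrix."""
    return [[cmath.exp(-1j * phi / 2), 0], [0, cmath.exp(1j * phi / 2)]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def model(x, weights, scale=1.0):
    """Single-qubit data-reuploading circuit: trainable RY layers
    interleaved with RZ(scale * x) encodings; returns <Z> on |0>."""
    U = ry(weights[0])
    for w in weights[1:]:
        U = matmul(ry(w), matmul(rz(scale * x), U))
    psi = [U[0][0], U[1][0]]                      # U |0>
    return abs(psi[0]) ** 2 - abs(psi[1]) ** 2    # expectation of Pauli-Z

weights = [0.3, 1.1, -0.7]  # toy parameters: two encoding layers
```

With the default encoding the output is 2π-periodic in `x`; setting `scale=2.0` doubles every frequency in the spectrum, so the same circuit becomes π-periodic. Making `scale` trainable is the frequency-rescaling mechanism discussed above.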

Implications and Future Directions

The findings from our paper present a nuanced view of the current state of quantum machine learning. While quantum models may not yet outperform classical models on small datasets, our benchmarks have unearthed valuable insights into model design principles and identified promising directions for future research.

Focusing on understanding the datasets that could genuinely benefit from quantum computing, refining hybrid quantum-classical architectures, and exploring the boundaries of "quantumness" necessary for quantum advantage are areas ripe for further investigation. Additionally, the paper underscores the need for more efficient and scalable quantum simulation software to support rigorous and comprehensive benchmarking efforts.

In conclusion, while the quest for quantum advantage in machine learning continues, benchmarking studies like ours are invaluable for directing research efforts, refining quantum algorithms, and ultimately unlocking the full potential of quantum computing for machine learning.
