Benchmarking Adversarially Robust Quantum Machine Learning at Scale (2211.12681v1)

Published 23 Nov 2022 in quant-ph, cs.ET, cs.LG, and physics.comp-ph

Abstract: Machine learning (ML) methods such as artificial neural networks are rapidly becoming ubiquitous in modern science, technology and industry. Despite their accuracy and sophistication, neural networks can be easily fooled by carefully designed malicious inputs known as adversarial attacks. While such vulnerabilities remain a serious challenge for classical neural networks, the extent of their existence is not fully understood in the quantum ML setting. In this work, we benchmark the robustness of quantum ML networks, such as quantum variational classifiers (QVC), at scale by performing rigorous training for both simple and complex image datasets and through a variety of high-end adversarial attacks. Our results show that QVCs offer a notably enhanced robustness against classical adversarial attacks by learning features which are not detected by the classical neural networks, indicating a possible quantum advantage for ML tasks. Contrarily, and remarkably, the converse is not true, with attacks on quantum networks also capable of deceiving classical neural networks. By combining quantum and classical network outcomes, we propose a novel adversarial attack detection technology. Traditionally quantum advantage in ML systems has been sought through increased accuracy or algorithmic speed-up, but our work has revealed the potential for a new kind of quantum advantage through superior robustness of ML models, whose practical realisation will address serious security concerns and reliability issues of ML algorithms employed in a myriad of applications including autonomous vehicles, cybersecurity, and surveillance robotic systems.
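The classical adversarial attacks the paper benchmarks against belong to gradient-based families such as the Fast Gradient Sign Method (FGSM). As a minimal sketch of the attack mechanics only, and not the paper's actual implementation or models, FGSM on a toy logistic classifier (all weights and inputs here are invented for illustration) looks like:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method against a logistic classifier.

    Shifts the input x by eps * sign(grad_x loss), i.e. the largest
    allowed step in the infinity-norm ball that increases the
    cross-entropy loss of the true label y.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # d(cross-entropy)/dx for a linear model
    return x + eps * np.sign(grad_x)

# Toy example: a point the classifier places on the class-1 side ...
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([2.0, 0.5])           # w @ x + b = 1.0 > 0, so class 1
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.6)
# ... crosses the decision boundary after a small bounded perturbation
print(w @ x_adv + b)               # negative: now classified as class 0
```

The interesting empirical claim of the paper is that perturbations crafted this way against classical networks transfer poorly to the quantum variational classifiers, while the reverse direction does transfer.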
