Towards quantum enhanced adversarial robustness in machine learning (2306.12688v1)
Abstract: Machine learning algorithms are powerful tools for data-driven tasks such as image classification and feature detection, but their vulnerability to adversarial examples - input samples manipulated to fool the algorithm - remains a serious challenge. Integrating machine learning with quantum computing has the potential to yield tools offering not only better accuracy and computational efficiency, but also superior robustness against adversarial attacks. Indeed, recent work has employed quantum mechanical phenomena to defend against adversarial attacks, spurring the rapid development of the field of quantum adversarial machine learning (QAML) and potentially yielding a new source of quantum advantage. Despite promising early results, challenges remain in building robust, real-world QAML tools. In this review we discuss recent progress in QAML and identify key challenges. We also suggest future research directions that could determine the route to practicality for QAML approaches as quantum computing hardware scales up and noise levels are reduced.
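To make the notion of an adversarial example concrete, the sketch below is a minimal illustration (not taken from the paper) of the fast gradient sign method (FGSM) applied to a toy logistic-regression classifier; the weights, input sample, and perturbation budget epsilon are invented for illustration only.

```python
# Minimal FGSM sketch on a toy logistic-regression classifier.
# Assumptions: hand-set weights and a single clean sample; values are
# illustrative and not drawn from the reviewed paper.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(x, y, w, b):
    """Gradient of the binary cross-entropy loss with respect to the input x."""
    p = sigmoid(w @ x + b)   # predicted probability of class 1
    return (p - y) * w       # d(BCE)/dx for logistic regression

# Toy "trained" classifier and a correctly classified clean sample.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x_clean = np.array([0.2, -0.4, 0.8])
y_true = 1.0

# FGSM: step each input feature by epsilon in the sign of the loss gradient,
# i.e. the direction that increases the loss the fastest.
epsilon = 0.5
x_adv = x_clean + epsilon * np.sign(loss_grad_wrt_input(x_clean, y_true, w, b))

print("clean score:", sigmoid(w @ x_clean + b))      # ~0.83: confident, correct
print("adversarial score:", sigmoid(w @ x_adv + b))  # ~0.40: prediction flips in this toy setup
```

The same idea underlies attacks on both classical deep networks and parametrised quantum classifiers: whenever the model is differentiable (or its gradients can be estimated), small input perturbations aligned with the loss gradient can change the predicted label.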