
QuMoS: A Framework for Preserving Security of Quantum Machine Learning Model (2304.11511v2)

Published 23 Apr 2023 in quant-ph, cs.CR, and cs.LG

Abstract: Security has always been a critical issue in ML applications. Because model training is expensive -- collecting relevant samples, labeling data, and consuming computing power -- model-stealing attacks are among the most fundamental and important threats. Such model-stealing attacks also exist for quantum machine learning (QML) models, and they are even more severe because traditional encryption methods, such as homomorphic encryption, can hardly be applied directly to quantum computation. Moreover, given the limited quantum computing resources available in the near term, the monetary cost of training a QML model can be even higher than that of a classical one. A well-tuned QML model developed by a third-party company may therefore be delegated to a quantum cloud provider as a service for ordinary users; in this case, the QML model is likely to be leaked if the cloud provider is compromised. To address this problem, we propose a novel framework, QuMoS, to preserve model security. We divide the complete QML model into multiple parts and distribute them to multiple physically isolated quantum cloud providers for execution, so that even if an adversary within a single provider obtains a partial model, it lacks sufficient information to recover the complete model. Although promising, we observed that an arbitrary model design under the distributed setting cannot guarantee model security. We further developed a reinforcement learning-based security engine that automatically optimizes the model design under the distributed setting, striking a good trade-off between model performance and security. Experimental results on four datasets show that the model design found by QuMoS achieves competitive performance while providing higher security than the baselines.
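To make the distributed-execution idea concrete, below is a minimal sketch, not the authors' implementation: a small two-qubit variational circuit is split into segments, each held by a separate "provider" object that sees only its own gates and parameters. Every name here (Provider, run_distributed) is a hypothetical illustration, and the hand-off of the state between providers is simulated classically purely for convenience; it does not model how QuMoS actually connects physically isolated quantum cloud providers, nor the paper's RL-based security engine.

```python
# Hypothetical sketch of QuMoS-style model splitting (NOT the paper's code).
# A 2-qubit variational circuit is partitioned into segments; each segment's
# parameters live inside one "provider", so no single party holds the full model.
import numpy as np

def ry(theta):
    """Single-qubit RY rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT on 2 qubits (control = qubit 0, target = qubit 1).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

class Provider:
    """Holds only its own slice of the model: one RY layer plus a CNOT."""
    def __init__(self, thetas):
        self.thetas = thetas  # private parameters, never shared with other providers

    def run(self, state):
        u = np.kron(ry(self.thetas[0]), ry(self.thetas[1]))
        return CNOT @ (u @ state)

def run_distributed(providers, state):
    # The client routes the (classically simulated) state through each
    # isolated provider in sequence; an adversary inside one provider
    # observes only that provider's parameters, not the complete model.
    for p in providers:
        state = p.run(state)
    return state

rng = np.random.default_rng(0)
providers = [Provider(rng.uniform(0, np.pi, size=2)) for _ in range(2)]
state = np.zeros(4); state[0] = 1.0  # |00>
out = run_distributed(providers, state)
print("output amplitudes:", np.round(out, 3))
```

As the paper observes, an arbitrary split like this one does not by itself guarantee security; QuMoS additionally searches over model designs (via reinforcement learning) to balance accuracy against how much any single partial model reveals.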

