SECO: Secure Inference With Model Splitting Across Multi-Server Hierarchy (2404.16232v1)

Published 24 Apr 2024 in cs.CR and cs.DC

Abstract: In the context of prediction-as-a-service, concerns about the privacy of the data and the model have been raised and addressed via secure inference protocols. These protocols are built from one or more cryptographic tools designed under a variety of security assumptions. In this paper, we introduce SECO, a secure inference protocol that enables a user holding an input data vector and multiple server nodes deployed with a split neural network model to collaboratively compute the prediction without compromising either party's data privacy. We extend prior work on secure inference, which requires the entire neural network model to reside on a single server node, to a multi-server hierarchy in which the user communicates with a gateway server node, which in turn communicates with remote server nodes. The inference task is split across the server nodes and must be performed over an encrypted copy of the data vector. We adopt multiparty homomorphic encryption and multiparty garbled circuit schemes, making the system secure against a dishonest majority of semi-honest servers while also protecting the partial model structure from the user. We evaluate SECO on multiple models and achieve a reduction in computation and communication cost for the user, making the protocol applicable to user devices with limited resources.
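The sketch below illustrates the data flow the abstract describes: the user encrypts its input vector, a gateway server relays ciphertexts to remote servers that each hold a slice of a split linear model, and only the user can decrypt the final prediction. It is not the SECO protocol itself: a toy Paillier instance (with deliberately tiny, insecure parameters) stands in for the RLWE-based multiparty homomorphic encryption SECO uses, non-linear layers (which SECO evaluates with multiparty garbled circuits) are omitted, and all class and function names are illustrative assumptions.

```python
"""
Toy sketch of hierarchical split inference over encrypted data (illustrative only).
Paillier with tiny, insecure parameters stands in for SECO's multiparty homomorphic
encryption; non-linear layers (handled in SECO by multiparty garbled circuits) are omitted.
"""
import math
import random

# --- toy Paillier keypair (demo primes; real deployments use ~2048-bit moduli) ---
p, q = 1009, 1013
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)          # private key
mu = pow(lam, -1, n)                  # private key

def enc(m: int) -> int:
    """Encrypt a signed integer under the public key n (generator g = n + 1)."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return ((1 + (m % n) * n) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    """Decrypt and map the result back to a signed integer."""
    m = ((pow(c, lam, n2) - 1) // n * mu) % n
    return m - n if m > n // 2 else m

def add_ct(c1: int, c2: int) -> int:
    """Enc(a) * Enc(b) mod n^2 is an encryption of a + b."""
    return (c1 * c2) % n2

def mul_plain(c: int, w: int) -> int:
    """Enc(a) ** w mod n^2 is an encryption of w * a (w may be negative)."""
    return pow(c, w % n, n2)

class RemoteServer:
    """Holds one linear layer (integer weights) of the split model."""
    def __init__(self, W, b):
        self.W, self.b = W, b

    def eval_encrypted(self, cts):
        out = []
        for row, bias in zip(self.W, self.b):
            acc = enc(bias)                       # bias needs only the public key
            for c, w in zip(cts, row):
                acc = add_ct(acc, mul_plain(c, w))
            out.append(acc)                       # still a ciphertext
        return out

class GatewayServer:
    """Front-end node the user talks to; relays ciphertexts to the remote servers."""
    def __init__(self, remotes):
        self.remotes = remotes

    def run(self, cts):
        for server in self.remotes:
            cts = server.eval_encrypted(cts)      # encrypted end to end
        return cts

# User side: encrypt the input, query the hierarchy, decrypt only the prediction.
x = [3, -1, 4]
gateway = GatewayServer([RemoteServer(W=[[2, 0, 1], [-1, 5, 2]], b=[7, -3])])
prediction = [dec(c) for c in gateway.run([enc(v) for v in x])]
print(prediction)  # [17, -3], i.e. W @ x + b computed without revealing x to the servers
```

The point of the sketch is the hierarchy: the user talks only to the gateway, each model slice stays on its remote server, and the plaintext input never leaves the user's device.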
