
Ensembler: Protect Collaborative Inference Privacy from Model Inversion Attack via Selective Ensemble (2401.10859v2)

Published 19 Jan 2024 in cs.CR and cs.LG

Abstract: For collaborative inference through a cloud computing platform, it is sometimes essential for the client to shield its sensitive information from the cloud provider. In this paper, we introduce Ensembler, an extensible framework designed to substantially increase the difficulty of conducting model inversion attacks by adversarial parties. Ensembler leverages selective model ensemble on the adversarial server to obfuscate the reconstruction of the client's private information. Our experiments demonstrate that Ensembler can effectively shield input images from reconstruction attacks, even when the client only retains one layer of the network locally. Ensembler significantly outperforms baseline methods by up to 43.5% in structural similarity while only incurring 4.8% time overhead during inference.
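
The abstract describes a split ("collaborative") inference setup in which the client keeps only a single layer of the network locally, sends intermediate activations to the cloud, and the server-side computation is performed by a selectively chosen ensemble of models to hinder input reconstruction. The sketch below is a minimal PyTorch illustration of that setup under stated assumptions: the layer shapes, module names (ClientHead, ServerTail, collaborative_inference), and the logit-averaging ensemble rule are illustrative placeholders, not the paper's exact design.

    import torch
    import torch.nn as nn

    class ClientHead(nn.Module):
        """The single layer retained on the client; only its output leaves the device."""
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)

        def forward(self, x):
            return torch.relu(self.conv(x))

    class ServerTail(nn.Module):
        """One candidate server-side network that completes the computation."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(32, num_classes),
            )

        def forward(self, h):
            return self.body(h)

    def collaborative_inference(image, client_head, server_tails, selected_ids):
        """Split inference: client layer runs locally, then a selected subset of
        server-side models processes the shared activation. Averaging their logits
        is an assumed stand-in for the paper's selective-ensemble rule."""
        h = client_head(image)  # intermediate activation shared with the server
        logits = [server_tails[i](h) for i in selected_ids]
        return torch.stack(logits).mean(dim=0)

    if __name__ == "__main__":
        client = ClientHead()
        tails = [ServerTail() for _ in range(4)]  # pool of server-side candidates
        x = torch.randn(1, 3, 32, 32)             # dummy private input
        y = collaborative_inference(x, client, tails, selected_ids=[0, 2])
        print(y.shape)                            # torch.Size([1, 10])

The point of the sketch is the division of labor: a model-inversion attacker on the server only ever observes the activation h, and the selective ensemble determines which server-side models act on it; the actual selection and obfuscation mechanism is specified in the paper itself.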

