GuaranTEE: Towards Attestable and Private ML with CCA (2404.00190v1)
Abstract: Machine-learning (ML) models are increasingly deployed on edge devices to provide a variety of services. However, their deployment is accompanied by challenges in model privacy and auditability. Model providers want to (i) ensure that their proprietary models are not exposed to third parties, and (ii) obtain attestations that their genuine models are operating on edge devices in accordance with the service agreement with the user. Existing measures to address these challenges have been hindered by high overheads and the limited capabilities (processing power, secure memory) of edge devices. In this work, we propose GuaranTEE, a framework for attestable, private machine learning on the edge. GuaranTEE uses the Confidential Computing Architecture (CCA), Arm's latest architectural extension, which allows dynamic Trusted Execution Environments (TEEs) to be created and deployed, and within which models can be executed. We evaluate CCA's suitability for deploying ML models by developing, evaluating, and openly releasing a prototype. We also suggest improvements to CCA that would facilitate its use in protecting the entire ML deployment pipeline on edge devices.
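The attestation workflow the abstract alludes to can be sketched as follows. This is an illustrative simplification, not the paper's code or CCA's actual token format: the model provider compares the realm's reported launch measurement against a known-good build before trusting the deployment. The field name `realm_measurement` and the helper functions are hypothetical; real CCA attestation tokens are signed CBOR/COSE structures whose signature must also be verified.

```python
# Hedged sketch: a model provider checking a (simplified, hypothetical)
# realm attestation report before trusting an edge deployment.
import hashlib
import hmac

def measure(blob: bytes) -> str:
    # Stand-in for the launch measurement of the realm image
    # (e.g. the ML runtime plus model loader).
    return hashlib.sha256(blob).hexdigest()

def verify_report(report: dict, expected_measurement: str) -> bool:
    # Accept the realm only if its reported launch measurement matches
    # the provider's known-good build. Checking the token's signature
    # against the platform attestation key is elided here.
    return hmac.compare_digest(report.get("realm_measurement", ""),
                               expected_measurement)

# Provider side: measurement of the known-good realm image.
realm_image = b"ml-runtime-v1 + model-loader"
expected = measure(realm_image)

# Device side: the realm reports its measurement in an attestation token.
genuine = {"realm_measurement": measure(realm_image)}
tampered = {"realm_measurement": measure(b"tampered-image")}

assert verify_report(genuine, expected)       # genuine realm accepted
assert not verify_report(tampered, expected)  # modified realm rejected
```

In the real protocol, the verification would be performed by a remote verifier (or the provider's backend) against a token signed by the platform, so a compromised device cannot simply claim the expected measurement.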