Privacy-Enhancing Technologies for Artificial Intelligence-Enabled Systems (2404.03509v1)
Published 4 Apr 2024 in cs.CR
Abstract: AI models introduce privacy vulnerabilities to systems. These vulnerabilities may impact model owners or system users; they exist during the model development, deployment, and inference phases, and threats can be internal or external to the system. In this paper, we investigate potential threats and propose the use of several privacy-enhancing technologies (PETs) to defend AI-enabled systems. We then provide a framework for evaluating PETs for AI-enabled systems and discuss the impact PETs may have on system-level variables.
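The paper itself does not include code, but a small worked example helps make the notion of a PET concrete. The sketch below, written as an illustration only, shows one widely used PET, the Laplace mechanism for differential privacy, applied to a simple numeric query; the dataset, bounds, and epsilon value are assumptions for the example and are not taken from the paper.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of `true_value`.

    Adds Laplace noise with scale sensitivity/epsilon, the standard
    (epsilon, 0)-differential-privacy mechanism for numeric queries.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release the mean of a small, bounded attribute.
ages = [23, 35, 41, 29, 52]                 # hypothetical records
true_mean = sum(ages) / len(ages)
# Sensitivity of the mean when each value is bounded in [0, 100]
# and the dataset size (n = 5) is fixed: (max - min) / n.
sensitivity = 100 / len(ages)
private_mean = laplace_mechanism(true_mean, sensitivity, epsilon=1.0)
print(f"true mean = {true_mean:.2f}, private release = {private_mean:.2f}")
```

Smaller epsilon values add more noise and give stronger privacy at the cost of accuracy, which is exactly the kind of system-level trade-off the paper's evaluation framework is meant to surface.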