PPFL: Privacy-preserving Federated Learning with Trusted Execution Environments (2104.14380v2)

Published 29 Apr 2021 in cs.CR, cs.DC, and cs.LG

Abstract: We propose and implement a Privacy-preserving Federated Learning ($PPFL$) framework for mobile systems to limit privacy leakages in federated learning. Leveraging the widespread presence of Trusted Execution Environments (TEEs) in high-end and mobile devices, we utilize TEEs on clients for local training, and on servers for secure aggregation, so that model/gradient updates are hidden from adversaries. Challenged by the limited memory size of current TEEs, we leverage greedy layer-wise training to train each model's layer inside the trusted area until its convergence. The performance evaluation of our implementation shows that $PPFL$ can significantly improve privacy while incurring small system overheads at the client-side. In particular, $PPFL$ can successfully defend the trained model against data reconstruction, property inference, and membership inference attacks. Furthermore, it can achieve comparable model utility with fewer communication rounds (0.54$\times$) and a similar amount of network traffic (1.002$\times$) compared to the standard federated learning of a complete model. This is achieved while only introducing up to ~15% CPU time, ~18% memory usage, and ~21% energy consumption overhead in $PPFL$'s client-side.

Citations (221)

Summary

  • The paper introduces PPFL, a framework that leverages TEEs to secure local training and model aggregation in federated learning.
  • It employs a greedy layer-wise training approach to overcome memory limitations of TEEs while maintaining competitive performance.
  • Empirical results show reduced communication rounds and manageable overheads, highlighting PPFL’s potential for secure mobile systems.

Privacy-Preserving Federated Learning with Trusted Execution Environments

The paper introduces a framework named Privacy-preserving Federated Learning (PPFL) designed to enhance data privacy during federated learning (FL) on mobile systems. The proposed framework uses Trusted Execution Environments (TEEs), widely available in modern devices, to ensure security for both local training on client devices and model aggregation on servers. By structuring training around TEEs, PPFL addresses privacy leakages associated with federated learning.

Key Methodology and Innovations

  1. Utilization of TEEs: The framework uses TEEs as trusted spaces for sensitive computation: client-side TEEs run local training, while a server-side TEE performs secure aggregation of model updates. This keeps model gradients hidden from adversaries who might otherwise exploit them to infer private information (a sketch of the aggregation step appears after this list).
  2. Greedy Layer-Wise Training: Because current TEEs offer only limited secure memory, PPFL trains the model greedily, one layer at a time: each layer is trained to convergence inside the TEE before training moves to the next, so every part of the model receives TEE protection (see the client-side sketch below).
  3. Performance Metrics: PPFL achieves model utility comparable to standard federated learning of a complete model while needing about 0.54× the communication rounds and nearly identical total network traffic (1.002×). Client-side overheads remain modest: roughly 15% more CPU time, 18% more memory usage, and 21% more energy consumption.
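
To make the client-side procedure concrete, below is a minimal PyTorch sketch of one greedy layer-wise training phase. It assumes the model is a list of nn.Module blocks, each trained with a small auxiliary classifier head as in standard greedy layer-wise learning; the function name train_block and all parameters are illustrative rather than the paper's actual code, and the TEE mechanics (TrustZone world switching, attestation, sealed memory) are deliberately elided.

```python
import torch
import torch.nn as nn

def train_block(blocks, idx, head, loader, epochs=1, lr=0.01):
    """Greedy layer-wise step: train blocks[idx] plus an auxiliary head
    while earlier, already-converged blocks stay frozen. In PPFL this
    loop would execute inside the client TEE, so layer parameters and
    gradients never appear in untrusted memory."""
    frozen = nn.Sequential(*blocks[:idx]).eval()  # empty prefix when idx == 0
    for p in frozen.parameters():
        p.requires_grad = False
    trainable = nn.Sequential(blocks[idx], head)
    opt = torch.optim.SGD(trainable.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                x = frozen(x)                    # forward through the frozen prefix
            opt.zero_grad()
            loss_fn(trainable(x), y).backward()  # gradients touch one block only
            opt.step()
    return trainable.state_dict()                # update shipped to the server TEE
```

Because only one block (plus its small head) is resident at a time, the working set fits within TrustZone's limited secure memory, which is the constraint that motivates the layer-wise design.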
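On the server side, PPFL's secure aggregation is conceptually plain federated averaging executed inside an SGX enclave, so individual client updates are never exposed in the clear outside trusted memory. A hedged sketch follows, with illustrative naming (fed_avg) and all enclave plumbing (remote attestation, secure channels) omitted:

```python
import copy
import torch

def fed_avg(client_states):
    """Average clients' per-layer updates (FedAvg). In PPFL this would run
    inside the server's SGX enclave: updates arrive over attested secure
    channels and only the aggregated layer leaves the enclave.
    Assumes floating-point parameters of identical shapes."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in client_states]).mean(dim=0)
    return avg
```

Rounds of local block training and aggregation repeat until the current layer converges; the layer is then frozen and broadcast to clients, and training advances to the next one.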

Experimental Validation

The framework was implemented with Intel SGX on the server and Arm TrustZone on clients, and validated empirically across models and datasets, including LeNet on MNIST and AlexNet on CIFAR-10. The experiments indicate that layer-wise training defends against data reconstruction, property inference, and membership inference attacks while maintaining model utility.

Implications and Future Work

The paper shows that employing TEEs in federated learning can substantially mitigate privacy risks without sacrificing performance, an important result for mobile and distributed systems where data privacy is paramount. Future research could explore combining PPFL with other privacy-enhancing technologies, such as differential privacy or homomorphic encryption, to further strengthen the privacy-utility trade-off. Additionally, applying PPFL's principles to other machine learning architectures and tasks could broaden its utility across diverse domains.

Overall, PPFL represents a significant step toward practical privacy-preserving federated learning systems by combining innovative training methodologies with advanced hardware security features.
