- The paper introduces PPFL, a framework that leverages TEEs to secure local training and model aggregation in federated learning.
- It employs a greedy layer-wise training approach to overcome memory limitations of TEEs while maintaining competitive performance.
- Empirical results show reduced communication rounds and manageable overheads, highlighting PPFL’s potential for secure mobile systems.
Privacy-Preserving Federated Learning with Trusted Execution Environments
The paper introduces Privacy-preserving Federated Learning (PPFL), a framework designed to strengthen data privacy during federated learning (FL) on mobile systems. PPFL relies on Trusted Execution Environments (TEEs), which are widely available in modern devices, to secure both local training on client devices and model aggregation on the server. By keeping training inside TEEs, PPFL counters the privacy leakages, such as data reconstruction and inference attacks, that arise when model updates are exposed in plaintext.
Key Methodology and Innovations
- Utilization of TEEs: The framework leverages the security guarantees of TEEs to create a trusted space for sensitive computations. Client-side TEEs run local training, while a server-side TEE performs secure aggregation of model updates (a minimal aggregation sketch follows this list). This setup conceals individual model updates and gradients from adversaries who might otherwise exploit them to infer private information.
- Greedy Layer-Wise Training: Given the limited memory of current TEEs, PPFL trains the model greedily, layer by layer: each layer is trained to convergence inside the TEE before training moves to the next one, so only the layer currently being trained has to fit in protected memory (see the training-loop sketch after this list). In this way, every part of the model is trained under TEE protection.
- Performance Metrics: PPFL reaches comparable model accuracy while needing only about 0.54x the communication rounds of conventional end-to-end federated learning and generating similar network traffic (about 1.002x). System overheads remain manageable, with CPU time, memory usage, and energy consumption rising by roughly 15%, 18%, and 21%, respectively.
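To make the server-side role concrete, below is a minimal FedAvg-style averaging sketch in Python/PyTorch. It only illustrates the aggregation logic that PPFL places inside the server's TEE; the function name, the dict-of-tensors layout, and the optional per-client weighting are illustrative assumptions, not the paper's actual SGX implementation.

```python
import torch

def aggregate_updates(client_updates, client_weights=None):
    """Weighted FedAvg-style averaging of per-layer updates.

    In PPFL this logic would execute inside the server's TEE (e.g. an SGX
    enclave), so individual client updates never appear in plaintext outside it.
    client_updates: list of dicts mapping parameter name -> torch.Tensor.
    client_weights: optional per-client weights (e.g. local dataset sizes).
    """
    if client_weights is None:
        client_weights = [1.0] * len(client_updates)
    total = float(sum(client_weights))

    aggregated = {}
    for name in client_updates[0]:
        # Weighted average of each parameter tensor across all clients.
        aggregated[name] = sum(
            w * upd[name] for w, upd in zip(client_weights, client_updates)
        ) / total
    return aggregated
```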
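The greedy layer-wise procedure can be sketched as follows (PyTorch, centralized for brevity): earlier blocks stay frozen while the current block and a small auxiliary head are trained, so only that block's state would need to live inside the memory-limited client TEE. The `blocks`/`heads` interface and the auxiliary classification heads are assumptions for illustration, and the federated rounds that PPFL interleaves with this loop are omitted.

```python
import torch
import torch.nn as nn

def train_layer_wise(blocks, heads, loader, epochs_per_layer=1, lr=0.01,
                     device="cpu"):
    """Greedy layer-wise training: each block is trained with all earlier
    blocks frozen, so only the active block (plus its small head) needs to
    fit in limited TEE memory."""
    loss_fn = nn.CrossEntropyLoss()
    frozen = nn.Sequential().to(device)           # already-trained, frozen prefix
    for block, head in zip(blocks, heads):
        block, head = block.to(device), head.to(device)
        opt = torch.optim.SGD(
            list(block.parameters()) + list(head.parameters()), lr=lr)
        for _ in range(epochs_per_layer):
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                with torch.no_grad():
                    feats = frozen(x)             # features from the frozen prefix
                loss = loss_fn(head(block(feats)), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        for p in block.parameters():              # freeze the finished block
            p.requires_grad_(False)
        frozen.append(block)
    return frozen
```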
Experimental Validation
The framework was implemented with Intel SGX on the server side and Arm TrustZone on clients, and evaluated on models and datasets such as LeNet on MNIST and AlexNet on CIFAR-10. The experiments indicate that layer-wise training inside TEEs protects against data reconstruction and inference attacks while preserving model utility.
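As an illustration of how one of the evaluated models could be fed to the layer-wise loop sketched above, here is a hypothetical partitioning of a LeNet-style network for 28x28 MNIST inputs into blocks and per-block heads. The exact block boundaries and head shapes are assumptions, not the paper's reported configuration.

```python
import torch.nn as nn

# A LeNet-style model for 28x28 MNIST inputs, split into blocks so that each
# block can be trained on its own inside the TEE (illustrative partitioning).
lenet_blocks = [
    nn.Sequential(nn.Conv2d(1, 6, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2)),  # -> 6x14x14
    nn.Sequential(nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2)),            # -> 16x5x5
    nn.Sequential(nn.Flatten(), nn.Linear(16 * 5 * 5, 120), nn.ReLU()),
    nn.Sequential(nn.Linear(120, 84), nn.ReLU()),
]

# Per-block classification heads, used only while the matching block is trained.
lenet_heads = [
    nn.Sequential(nn.Flatten(), nn.Linear(6 * 14 * 14, 10)),
    nn.Sequential(nn.Flatten(), nn.Linear(16 * 5 * 5, 10)),
    nn.Linear(120, 10),
    nn.Linear(84, 10),
]

# Example usage (assuming an MNIST DataLoader named mnist_loader exists):
# model = train_layer_wise(lenet_blocks, lenet_heads, mnist_loader)
```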
Implications and Future Work
The paper highlights that employing TEEs in federated learning can substantially mitigate privacy risks without compromising performance, which matters for mobile and distributed systems where data privacy is paramount. Future research could combine PPFL with other privacy-enhancing technologies, such as differential privacy or homomorphic encryption, to further strengthen the privacy-utility trade-off. Applying PPFL's principles to other machine-learning architectures and tasks could also broaden its utility across diverse domains.
Overall, PPFL represents a significant step toward practical privacy-preserving federated learning systems by combining innovative training methodologies with advanced hardware security features.