
LDP-Fed: Federated Learning with Local Differential Privacy (2006.03637v1)

Published 5 Jun 2020 in cs.LG, cs.CR, and stat.ML

Abstract: This paper presents LDP-Fed, a novel federated learning system with a formal privacy guarantee using local differential privacy (LDP). Existing LDP protocols are developed primarily to ensure data privacy in the collection of single numerical or categorical values, such as click count in Web access logs. However, in federated learning model parameter updates are collected iteratively from each participant and consist of high dimensional, continuous values with high precision (10s of digits after the decimal point), making existing LDP protocols inapplicable. To address this challenge in LDP-Fed, we design and develop two novel approaches. First, LDP-Fed's LDP Module provides a formal differential privacy guarantee for the repeated collection of model training parameters in the federated training of large-scale neural networks over multiple individual participants' private datasets. Second, LDP-Fed implements a suite of selection and filtering techniques for perturbing and sharing select parameter updates with the parameter server. We validate our system deployed with a condensed LDP protocol in training deep neural networks on public data. We compare this version of LDP-Fed, coined CLDP-Fed, with other state-of-the-art approaches with respect to model accuracy, privacy preservation, and system capabilities.

Citations (344)

Summary

  • The paper introduces LDP-Fed, a framework that integrates local differential privacy into federated learning to safeguard high-dimensional neural network updates.
  • It implements a selective parameter update strategy to balance privacy preservation with model performance.
  • Empirical results on FashionMNIST demonstrate that LDP-Fed achieves competitive accuracy compared to non-private and cryptographic privacy methods.

Federated Learning with Local Differential Privacy: An Overview of LDP-Fed

The paper "LDP-Fed: Federated Learning with Local Differential Privacy" introduces a federated learning system that leverages local differential privacy (LDP) to safeguard data privacy during the training of deep neural networks (DNNs). This approach is designed to address the limitations of existing local differential privacy protocols, which are insufficient for handling the high dimensionality and precision of model parameter updates in federated learning.

Challenges in Federated Learning and LDP

Federated learning allows multiple participants to collaboratively train a machine learning model without sharing their raw data with a central server. Instead, participants share model updates with a parameter server, which aggregates these updates. This setup, however, is vulnerable to privacy inference attacks, as adversaries may infer sensitive information from the shared parameter updates. Existing solutions, such as secure multiparty computation (SMC) or differentially private optimizers, either require a trusted party, rely on computationally expensive cryptographic methods, or do not adequately handle the high-dimensional parameter vectors characteristic of DNN models.
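The aggregation step described above can be sketched as a minimal federated-averaging round. This is an illustrative simplification, not the paper's system: `local_update` is a hypothetical stand-in for a participant's local training step.

```python
import numpy as np

def federated_round(global_params, client_datasets, local_update):
    """One round of federated averaging: each participant trains locally
    and the parameter server averages the returned parameters.
    `local_update` stands in for a client's local SGD step (hypothetical)."""
    client_params = [local_update(global_params, data) for data in client_datasets]
    # The server sees only parameter updates, never the raw data --
    # yet these updates are what inference attacks target.
    return np.mean(client_params, axis=0)
```

Note that without additional protection, the averaged (and individual) updates remain visible to the server, which motivates the LDP mechanism below.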

LDP-Fed Approach

LDP-Fed is presented as an innovative solution that incorporates LDP into the federated learning workflow. This system features two main contributions:

  1. LDP Module: It introduces a Local Differential Privacy Module that ensures formal privacy guarantees when collecting model training parameters. This module enables participants to locally define their desired privacy level by setting a privacy budget, which dictates the level of noise added to their parameter updates. LDP-Fed extends traditional single value LDP techniques to accommodate the intricacies of high-dimensional continuous-valued parameter vectors.
  2. Selective Parameter Update Sharing: To manage the noise introduced by privacy-preserving perturbation, LDP-Fed implements a selective sharing approach. Instead of sharing complete parameter sets, participants share updates for specific layers or parameters in each round. This method optimizes the balance between privacy preservation and model accuracy.
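The two contributions can be sketched together in a few lines. This is a hedged illustration only: it uses Laplace noise as a generic LDP-style perturbation (the paper's actual condensed-LDP protocol is different and not reproduced here), and the layer-selection rule is a hypothetical random choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_update(update, epsilon, sensitivity=1.0):
    """Local perturbation sketch: add Laplace noise calibrated to the
    participant's privacy budget epsilon. (Generic stand-in for the
    paper's condensed-LDP protocol, not the actual mechanism.)"""
    scale = sensitivity / epsilon
    return update + rng.laplace(0.0, scale, size=update.shape)

def select_and_share(layer_updates, k, epsilon=1.0):
    """Selective sharing sketch: perturb and upload only k randomly
    chosen layers per round instead of the full parameter set,
    reducing both the noise injected and the information exposed."""
    chosen = rng.choice(len(layer_updates), size=k, replace=False)
    return {int(i): perturb_update(layer_updates[i], epsilon) for i in chosen}
```

Sharing fewer, perturbed layers per round is the lever the paper uses to trade privacy against accuracy; the random selection here merely illustrates the interface.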

Experimental Validation

LDP-Fed has been empirically validated using the FashionMNIST dataset against several baseline federated learning methods, including non-private federated learning, SMC, and differentially private stochastic gradient descent (DPSGD). The results demonstrate that LDP-Fed maintains competitive accuracy while providing a robust privacy guarantee. Specifically, the α-CLDP-Fed variant achieved the highest accuracy among the privacy-preserving approaches compared, highlighting the effectiveness of its selective parameter update strategy.

Implications and Future Directions

The introduction of LDP-Fed has significant implications for privacy-preserving machine learning. By enabling participants to set local privacy guarantees and adapt these settings dynamically, LDP-Fed offers greater flexibility and security in federated learning environments. Moreover, the system provides a scalable solution suitable for training complex models over large and diverse datasets without compromising data privacy.

Future research directions might explore enhancing the efficiency of the LDP module, optimizing the selection and sharing of parameter updates, and extending the framework to other types of machine learning models. Additionally, investigating the application of LDP-Fed in real-world settings with diverse privacy requirements and constraints could further validate its usability and adaptability.

In conclusion, LDP-Fed represents a substantive advancement in federated learning, paving the way for secure, scalable, and privacy-preserving collaborative model training.