
Promoting Data and Model Privacy in Federated Learning through Quantized LoRA (2406.10976v1)

Published 16 Jun 2024 in cs.LG, cs.CL, and cs.CR

Abstract: Conventional federated learning primarily aims to secure the privacy of data distributed across multiple edge devices, with the global model dispatched to edge devices for parameter updates during the learning process. However, the development of LLMs requires substantial data and computational resources, rendering them valuable intellectual property for their developers and owners. To establish a mechanism that protects both data and model privacy in a federated learning context, we introduce a method that only requires distributing a quantized version of the model's parameters during training. This method enables accurate gradient estimations for parameter updates while preventing clients from accessing a model whose performance is comparable to the centrally hosted one. Moreover, we combine this quantization strategy with LoRA, a popular and parameter-efficient fine-tuning method, to significantly reduce communication costs in federated learning. The proposed framework, named FedLPP, successfully ensures both data and model privacy in the federated learning context. Additionally, the learned central model exhibits good generalization and can be trained in a resource-efficient manner.
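The abstract describes the mechanism only at a high level, so the snippet below is a minimal PyTorch sketch of the general quantize-then-LoRA pattern it outlines, not the paper's implementation. The names quantize_weight, QuantizedLoRALinear, and aggregate_lora, the uniform quantizer, the LoRA rank, and the FedAvg-style aggregation are all illustrative assumptions; the paper's actual quantization scheme and the way the server uses client updates to estimate gradients for its full-precision model are not specified here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def quantize_weight(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Uniform quantization of a weight tensor (placeholder for the paper's scheme)."""
    levels = 2 ** bits - 1
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min).clamp(min=1e-8) / levels
    return torch.round((w - w_min) / scale) * scale + w_min


class QuantizedLoRALinear(nn.Module):
    """Frozen, quantized base weight plus a trainable low-rank (LoRA) update.

    Clients only ever receive the quantized base weights, so the full-precision
    central model is never exposed; only the small LoRA matrices are trained
    locally and communicated back to the server.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, bits: int = 4):
        super().__init__()
        out_features, in_features = base.weight.shape
        # Quantized base weight is a buffer: never updated, never sent back.
        self.register_buffer("weight_q", quantize_weight(base.weight.data, bits))
        self.register_buffer("bias", None if base.bias is None else base.bias.data.clone())
        # LoRA factors: B starts at zero so training begins from the quantized base.
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.weight_q + self.lora_b @ self.lora_a, self.bias)


def aggregate_lora(client_states: list[dict], weights: list[float]) -> dict:
    """FedAvg-style weighted average of the clients' LoRA parameters only."""
    return {
        key: sum(w * state[key] for w, state in zip(weights, client_states))
        for key in client_states[0]
    }
```

In this sketch, communication is limited to the two LoRA matrices per wrapped layer, which is where the reduction in communication cost comes from; the step in which the server uses the aggregated client updates to estimate gradients for its full-precision model is part of the paper's method but is not shown here.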

Authors (10)
  1. Jianhao Zhu (4 papers)
  2. Changze Lv (22 papers)
  3. Xiaohua Wang (26 papers)
  4. Muling Wu (13 papers)
  5. Wenhao Liu (83 papers)
  6. Tianlong Li (13 papers)
  7. Zixuan Ling (8 papers)
  8. Cenyuan Zhang (10 papers)
  9. Xiaoqing Zheng (44 papers)
  10. Xuanjing Huang (287 papers)