Training Large-Vocabulary Neural Language Models by Private Federated Learning for Resource-Constrained Devices (2207.08988v1)

Published 18 Jul 2022 in cs.LG, cs.CL, and cs.CR

Abstract: Federated Learning (FL) is a technique to train models using data distributed across devices. Differential Privacy (DP) provides a formal privacy guarantee for sensitive data. Our goal is to train a large neural network language model (NNLM) on compute-constrained devices while preserving privacy using FL and DP. However, the DP-noise introduced to the model increases as the model size grows, which often prevents convergence. We propose Partial Embedding Updates (PEU), a novel technique to decrease noise by decreasing payload size. Furthermore, we adopt Low Rank Adaptation (LoRA) and Noise Contrastive Estimation (NCE) to reduce the memory demands of large models on compute-constrained devices. This combination of techniques makes it possible to train large-vocabulary language models while preserving accuracy and privacy.
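The abstract names the techniques but gives no implementation detail. As an illustration only, below is a minimal PyTorch-style sketch of a LoRA-adapted linear layer of the general kind the paper adopts to cut on-device memory and the number of trainable parameters; the class name, rank r, and scaling factor alpha are assumptions for this sketch, not values taken from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen dense layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, in_features: int, out_features: int, r: int = 4, alpha: float = 8.0):
        super().__init__()
        # Pretrained weight and bias stay frozen on the device.
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # Only the low-rank factors A and B are trained, so the trainable
        # (and transmitted) parameter count drops from in*out to r*(in + out).
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```

In a federated DP setting of the kind the abstract describes, only the small trainable tensors (here lora_A and lora_B, plus whatever reduced embedding payload PEU selects) would be clipped, noised, and sent to the server, which is what keeps the per-round payload, and hence the injected DP noise, small; the exact client/server protocol is not specified in the abstract.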

Authors (13)
  1. Mingbin Xu (12 papers)
  2. Congzheng Song (23 papers)
  3. Ye Tian (190 papers)
  4. Neha Agrawal (2 papers)
  5. Filip Granqvist (7 papers)
  6. Rogier van Dalen (14 papers)
  7. Xiao Zhang (435 papers)
  8. Arturo Argueta (5 papers)
  9. Shiyi Han (7 papers)
  10. Yaqiao Deng (3 papers)
  11. Leo Liu (11 papers)
  12. Anmol Walia (2 papers)
  13. Alex Jin (4 papers)
Citations (22)
