Federated Pruning: Improving Neural Network Efficiency with Federated Learning (2209.06359v1)

Published 14 Sep 2022 in cs.LG and cs.AI

Abstract: Automatic Speech Recognition models require large amounts of speech data for training, and the collection of such data often raises privacy concerns. Federated learning has been widely used and is considered an effective decentralized technique for collaboratively learning a shared prediction model while keeping the data local on client devices. However, the limited computation and communication resources of client devices present practical difficulties for large models. To overcome these challenges, we propose Federated Pruning, which trains a reduced model under the federated setting while maintaining performance similar to that of the full model. Moreover, the vast amount of client data can also be leveraged to improve the pruning results compared to centralized training. We explore different pruning schemes and provide empirical evidence of the effectiveness of our methods.
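
The abstract describes pruning carried out inside a federated training loop. As a rough illustration of the idea, the sketch below combines standard federated averaging with server-side magnitude pruning between rounds; the toy linear model, the sparsity schedule, and all function names are illustrative assumptions, not the paper's actual ASR setup or pruning schemes.

```python
import numpy as np

# Minimal sketch of federated pruning: clients run local updates, the server
# averages them (FedAvg) and applies a magnitude-based pruning mask so that a
# progressively sparser model is trained without centralizing the data.
rng = np.random.default_rng(0)

def local_update(weights, mask, data, targets, lr=0.1, steps=5):
    """One client's local SGD on a least-squares objective, respecting the mask."""
    w = weights.copy()
    for _ in range(steps):
        grad = data.T @ (data @ w - targets) / len(targets)
        w -= lr * grad
        w *= mask  # keep pruned positions at zero
    return w

def magnitude_mask(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(sparsity * weights.size)
    if k == 0:
        return np.ones_like(weights)
    threshold = np.sort(np.abs(weights))[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

# Synthetic "client" datasets for a 20-dimensional regression problem.
dim, n_clients = 20, 8
true_w = rng.normal(size=dim) * (rng.random(dim) > 0.5)  # sparse ground truth
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(50, dim))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

server_w = np.zeros(dim)
mask = np.ones(dim)
for rnd in range(30):
    # Gradually increase sparsity over rounds (an assumed schedule).
    sparsity = min(0.5, 0.5 * rnd / 20)
    if rnd > 0:
        mask = magnitude_mask(server_w, sparsity)
    updates = [local_update(server_w * mask, mask, X, y) for X, y in clients]
    server_w = np.mean(updates, axis=0)  # FedAvg aggregation

print("final sparsity:", 1.0 - mask.mean())
```

The point of the sketch is only the control flow: pruning decisions are made on the aggregated server model, while the raw data never leaves the simulated clients.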

Authors (7)
  1. Rongmei Lin (11 papers)
  2. Yonghui Xiao (15 papers)
  3. Tien-Ju Yang (16 papers)
  4. Ding Zhao (172 papers)
  5. Li Xiong (75 papers)
  6. Giovanni Motta (11 papers)
  7. Françoise Beaufays (60 papers)
Citations (11)
