
FedLP: Layer-wise Pruning Mechanism for Communication-Computation Efficient Federated Learning (2303.06360v1)

Published 11 Mar 2023 in cs.LG, cs.AI, cs.DC, and cs.MA

Abstract: Federated learning (FL) has prevailed as an efficient and privacy-preserving scheme for distributed learning. In this work, we focus on optimizing computation and communication in FL from the perspective of pruning. By adopting layer-wise pruning in local training and federated updating, we formulate an explicit FL pruning framework, FedLP (Federated Layer-wise Pruning), which is model-agnostic and universal across different types of deep learning models. Two specific schemes of FedLP are designed for scenarios with homogeneous local models and heterogeneous ones. Both theoretical and experimental evaluations verify that FedLP relieves the system bottlenecks of communication and computation with marginal performance decay. To the best of our knowledge, FedLP is the first framework that formally introduces layer-wise pruning into FL. Within the scope of federated learning, more variants and combinations can be designed based on FedLP.
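The abstract describes the core idea only at a high level: clients prune whole layers during local training so that only a subset of layers is communicated and aggregated. As a rough illustration of that layer-wise idea (not the paper's actual algorithm), here is a minimal sketch in which each client independently retains each layer with probability `keep_prob` and the server averages each layer over the clients that sent it. The layer names, shapes, probability, and the noise-based stand-in for local training are all hypothetical.

```python
# Illustrative sketch only: layer-wise pruning in a federated-averaging round.
# All names (keep_prob, layer names, shapes) are hypothetical, not from the paper.
import random
import numpy as np

SHAPES = {"conv1": (8, 3), "conv2": (8, 8), "fc": (10, 8)}  # hypothetical layers

def local_update(global_model, keep_prob=0.7):
    """Client step: simulate local training (noise stands in for SGD),
    then keep each layer independently with probability keep_prob."""
    trained = {k: v + 0.01 * np.random.randn(*v.shape) for k, v in global_model.items()}
    kept = {k: v for k, v in trained.items() if random.random() < keep_prob}
    return kept  # only the retained layers are communicated

def aggregate(global_model, client_updates):
    """Server step: average each layer over the clients that sent it;
    a layer no client retained falls back to the previous global weights."""
    new_model = {}
    for name, weight in global_model.items():
        received = [u[name] for u in client_updates if name in u]
        new_model[name] = np.mean(received, axis=0) if received else weight
    return new_model

# One simulated round with 5 clients.
global_model = {k: np.random.randn(*s) for k, s in SHAPES.items()}
updates = [local_update(global_model) for _ in range(5)]
global_model = aggregate(global_model, updates)
print({k: v.shape for k, v in global_model.items()})
```

Because each client transmits only its retained layers, per-round communication shrinks roughly in proportion to `keep_prob`, which is the communication-computation saving the abstract refers to.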

Authors (7)
  1. Zheqi Zhu (7 papers)
  2. Yuchen Shi (23 papers)
  3. Jiajun Luo (11 papers)
  4. Fei Wang (574 papers)
  5. Chenghui Peng (19 papers)
  6. Pingyi Fan (137 papers)
  7. Khaled B. Letaief (209 papers)
Citations (17)
