SPATL: Salient Parameter Aggregation and Transfer Learning for Heterogeneous Clients in Federated Learning (2111.14345v2)

Published 29 Nov 2021 in cs.LG

Abstract: Federated learning (FL) facilitates training and deploying AI models on edge devices. Preserving user data privacy in FL introduces several challenges, including expensive communication costs, limited resources, and data heterogeneity. In this paper, we propose SPATL, an FL method that addresses these issues by: (a) introducing a salient parameter selection agent and communicating only the selected parameters; (b) splitting a model into a shared encoder and a local predictor, and transferring its knowledge to heterogeneous clients via the locally customized predictor. Additionally, we leverage a gradient control mechanism to further speed up model convergence and increase the robustness of training. Experiments demonstrate that SPATL reduces communication overhead, accelerates model inference, and enables more stable training with better results compared to state-of-the-art methods. Our approach reduces communication cost by up to 86.45%, accelerates local inference by reducing up to 39.7% of FLOPs on VGG-11, and requires 7.4× less communication overhead when training ResNet-20.
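The abstract's two main mechanisms lend themselves to a short sketch. Below is a minimal, hypothetical PyTorch illustration: the shared-encoder/local-predictor split, plus a simple top-k magnitude filter standing in for the paper's learned salient-parameter selection agent. The network sizes, function names (`select_salient`, `apply_salient`), and the top-k heuristic are all assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of two ideas from the SPATL abstract:
# (a) communicate only "salient" encoder parameters -- here approximated
#     with a top-k magnitude filter instead of the paper's learned agent;
# (b) split the model into a shared encoder (exchanged with the server)
#     and a predictor head that stays local to each client.
import torch
import torch.nn as nn


class ClientModel(nn.Module):
    """Shared encoder + locally customized predictor head (illustrative)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(              # shared across clients
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.predictor = nn.Linear(32, num_classes)  # never leaves the client


def select_salient(encoder: nn.Module, keep_ratio: float = 0.2):
    """Stand-in for the salient-parameter selection agent: upload only the
    largest-magnitude encoder weights, as (indices, values) per tensor."""
    update = {}
    for name, p in encoder.named_parameters():
        flat = p.detach().flatten()
        k = max(1, int(keep_ratio * flat.numel()))
        _, idx = flat.abs().topk(k)
        update[name] = (idx, flat[idx])
    return update


def apply_salient(encoder: nn.Module, update):
    """Server side: scatter the sparse client update into the global encoder."""
    with torch.no_grad():
        for name, p in encoder.named_parameters():
            idx, vals = update[name]
            p.view(-1).scatter_(0, idx, vals)


if __name__ == "__main__":
    client = ClientModel()
    sparse = select_salient(client.encoder, keep_ratio=0.2)  # ~20% of weights
    server_encoder = ClientModel().encoder
    apply_salient(server_encoder, sparse)
```

Communicating only the (index, value) pairs for a fraction of the encoder's weights, and never sending the predictor at all, is what drives the communication savings the abstract reports; the gradient control mechanism mentioned there is a separate stabilization step not sketched here.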

Authors (6)
  1. Sixing Yu (12 papers)
  2. Phuong Nguyen (27 papers)
  3. Waqwoya Abebe (6 papers)
  4. Wei Qian (51 papers)
  5. Ali Anwar (65 papers)
  6. Ali Jannesari (56 papers)
Citations (18)
