Auxo: Efficient Federated Learning via Scalable Client Clustering (2210.16656v2)

Published 29 Oct 2022 in cs.LG and cs.DC

Abstract: Federated learning (FL) is an emerging ML paradigm that enables heterogeneous edge devices to collaboratively train ML models without revealing their raw data to a logically centralized server. However, beyond the heterogeneous device capacity, FL participants often exhibit differences in their data distributions, which are not independent and identically distributed (Non-IID). Many existing works present point solutions to address issues like slow convergence, low final accuracy, and bias in FL, all stemming from client heterogeneity. In this paper, we explore an additional layer of complexity to mitigate such heterogeneity by grouping clients with statistically similar data distributions (cohorts). We propose Auxo to gradually identify such cohorts in large-scale, low-availability, and resource-constrained FL populations. Auxo then adaptively determines how to train cohort-specific models in order to achieve better model performance and ensure resource efficiency. Our extensive evaluations show that, by identifying cohorts with smaller heterogeneity and performing efficient cohort-based training, Auxo boosts various existing FL solutions in terms of final accuracy (2.1% - 8.2%), convergence time (up to 2.2x), and model bias (4.8% - 53.8%).
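Auxo's actual cohort identification is gradual and designed for large-scale, low-availability FL populations; the paper's algorithm is more involved than what follows. As a minimal illustrative sketch only, the snippet below shows the general idea behind cohort-based training: clients are grouped by the similarity of their model updates (here with a toy k-means over update vectors), and each cohort then aggregates its own model FedAvg-style. All names, the clustering criterion, and the toy data are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch of cohort-based federated aggregation.
# NOT Auxo's algorithm: a toy k-means over client update vectors,
# followed by per-cohort FedAvg-style averaging.

def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Toy k-means; deterministic init from evenly spaced points."""
    centers = [list(points[i * len(points) // k]) for i in range(k)]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each client update to the nearest cohort center.
        assign = [min(range(k), key=lambda c: dist2(p, centers[c]))
                  for p in points]
        # Recompute each center as the mean of its members.
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = [sum(v) / len(members) for v in zip(*members)]
    return assign

def fedavg(updates):
    """Average client updates coordinate-wise (uniform weights)."""
    return [sum(v) / len(updates) for v in zip(*updates)]

# Toy client updates: two statistically distinct groups (Non-IID).
client_updates = [
    [1.0, 0.1], [0.9, 0.0], [1.1, 0.2],   # group A
    [0.0, 1.0], [0.1, 0.9], [0.2, 1.1],   # group B
]

cohorts = kmeans(client_updates, k=2)
print(cohorts)  # → [0, 0, 0, 1, 1, 1]

# Train (here: just aggregate) a separate model per cohort.
cohort_models = {
    c: fedavg([u for u, a in zip(client_updates, cohorts) if a == c])
    for c in set(cohorts)
}
print(cohort_models)
```

Intuitively, each cohort-specific average is closer to every member's update than a single global average would be, which is the effect Auxo exploits to improve accuracy and reduce bias under client heterogeneity.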

Authors (6)
  1. Jiachen Liu (45 papers)
  2. Fan Lai (27 papers)
  3. Yinwei Dai (5 papers)
  4. Aditya Akella (44 papers)
  5. Harsha Madhyastha (1 paper)
  6. Mosharaf Chowdhury (39 papers)
Citations (7)
