Federated Learning Hyper-Parameter Tuning from a System Perspective (2211.13656v1)

Published 24 Nov 2022 in cs.LG and cs.DC

Abstract: Federated learning (FL) is a distributed model training paradigm that preserves clients' data privacy. It has gained tremendous attention from both academia and industry. FL hyper-parameters (e.g., the number of selected clients and the number of training passes) significantly affect the training overhead in terms of computation time, transmission time, computation load, and transmission load. However, the current practice of manually selecting FL hyper-parameters imposes a heavy burden on FL practitioners because applications have different training preferences. In this paper, we propose FedTune, an automatic FL hyper-parameter tuning algorithm tailored to applications' diverse system requirements in FL training. FedTune iteratively adjusts FL hyper-parameters during FL training and can be easily integrated into existing FL systems. Through extensive evaluations of FedTune for diverse applications and FL aggregation algorithms, we show that FedTune is lightweight and effective, achieving 8.48%-26.75% system overhead reduction compared to using fixed FL hyper-parameters. This paper assists FL practitioners in designing high-performance FL training solutions. The source code of FedTune is available at https://github.com/DataSysTech/FedTune.
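The abstract describes FedTune as iteratively adjusting FL hyper-parameters (such as the number of selected clients and the number of local training passes) against an application's preferences over computation time, transmission time, computation load, and transmission load. The sketch below is only a hedged illustration of that general idea: the preference-weighted cost, the toy round simulator, and the greedy adjustment rule are assumptions made for exposition, not the actual FedTune algorithm (which is available at the linked repository).

```python
import random

# Hypothetical sketch of per-round FL hyper-parameter tuning driven by an
# application's system preferences. NOT the FedTune implementation
# (see https://github.com/DataSysTech/FedTune); the weights, the simulated
# round metrics, and the greedy rule below are illustrative assumptions.

def weighted_overhead(metrics, prefs):
    """Combine the four system metrics named in the abstract (computation time,
    transmission time, computation load, transmission load) with
    application-specified preference weights."""
    return sum(prefs[k] * metrics[k] for k in prefs)

def run_round(num_clients, local_passes):
    """Toy stand-in for one FL training round; a real system would measure
    these metrics from the actual round instead of simulating them."""
    return {
        "comp_time":  local_passes * (1.0 + 0.1 * random.random()),
        "trans_time": 10.0 / num_clients + 0.1 * random.random(),
        "comp_load":  num_clients * local_passes,
        "trans_load": float(num_clients),
    }

def tune(prefs, rounds=50, num_clients=10, local_passes=2):
    """Greedy per-round adjustment: after each round, probe a neighboring
    configuration and keep it only if the weighted overhead improves."""
    cur_cost = weighted_overhead(run_round(num_clients, local_passes), prefs)
    for _ in range(rounds):
        cand_clients = max(1, num_clients + random.choice([-1, 1]))
        cand_passes = max(1, local_passes + random.choice([-1, 1]))
        cand_cost = weighted_overhead(run_round(cand_clients, cand_passes), prefs)
        if cand_cost < cur_cost:
            num_clients, local_passes, cur_cost = cand_clients, cand_passes, cand_cost
    return num_clients, local_passes

# Example: an application that cares mostly about transmission load.
print(tune({"comp_time": 0.1, "trans_time": 0.2, "comp_load": 0.2, "trans_load": 0.5}))
```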

Authors (7)
  1. Huanle Zhang (12 papers)
  2. Lei Fu (35 papers)
  3. Mi Zhang (85 papers)
  4. Pengfei Hu (54 papers)
  5. Xiuzhen Cheng (72 papers)
  6. Prasant Mohapatra (44 papers)
  7. Xin Liu (820 papers)
Citations (4)