
FedL2P: Federated Learning to Personalize (2310.02420v1)

Published 3 Oct 2023 in cs.LG, cs.CV, and cs.DC

Abstract: Federated learning (FL) research has made progress in developing algorithms for distributed learning of global models, as well as algorithms for local personalization of those common models to the specifics of each client's local data distribution. However, different FL problems may require different personalization strategies, and it may not even be possible to define an effective one-size-fits-all personalization strategy for all clients: depending on how similar each client's optimal predictor is to that of the global model, different personalization strategies may be preferred. In this paper, we consider the federated meta-learning problem of learning personalization strategies. Specifically, we consider meta-nets that induce the batch-norm and learning rate parameters for each client given local data statistics. By learning these meta-nets through FL, we allow the whole FL network to collaborate in learning a customized personalization strategy for each client. Empirical results show that this framework improves on a range of standard hand-crafted personalization baselines in both label and feature shift situations.
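The core idea in the abstract can be sketched concretely: a shared meta-net maps each client's local data statistics to personalization hyperparameters, such as per-layer learning rates. The sketch below is a minimal, hypothetical illustration of that mechanism; the architecture, statistics, and names are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_statistics(features: np.ndarray) -> np.ndarray:
    """Summarize a client's local data as per-feature means and variances."""
    return np.concatenate([features.mean(axis=0), features.var(axis=0)])

class LRMetaNet:
    """Tiny MLP mapping data statistics to per-layer learning rates in (0, lr_max).

    A stand-in for the learning-rate meta-net described in the abstract;
    weights here are random for illustration (in FedL2P they are meta-learned
    federatedly across clients).
    """
    def __init__(self, stat_dim: int, n_layers: int, hidden: int = 16,
                 lr_max: float = 0.1):
        self.W1 = rng.normal(0.0, 0.1, (stat_dim, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_layers))
        self.lr_max = lr_max

    def __call__(self, stats: np.ndarray) -> np.ndarray:
        h = np.tanh(stats @ self.W1)
        # Sigmoid keeps each suggested learning rate in (0, lr_max).
        return self.lr_max / (1.0 + np.exp(-(h @ self.W2)))

# Two clients with different local feature distributions (e.g. feature shift)
# receive different per-layer learning rates from the same shared meta-net.
meta_net = LRMetaNet(stat_dim=8, n_layers=4)   # 4 features -> mean+var = 8 stats
client_a = rng.normal(0.0, 1.0, (32, 4))
client_b = rng.normal(2.0, 0.5, (32, 4))
lrs_a = meta_net(client_statistics(client_a))
lrs_b = meta_net(client_statistics(client_b))
```

Because the meta-net is shared and trained across the federation, all clients collaborate in learning how to personalize, while each client still receives hyperparameters tailored to its own data distribution.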

Authors (7)
  1. Royson Lee (19 papers)
  2. Minyoung Kim (34 papers)
  3. Da Li (95 papers)
  4. Xinchi Qiu (26 papers)
  5. Timothy Hospedales (101 papers)
  6. Nicholas D. Lane (97 papers)
  7. Ferenc Huszár (26 papers)