Self-Aware Personalized Federated Learning (2204.08069v1)

Published 17 Apr 2022 in cs.LG and cs.AI

Abstract: In the context of personalized federated learning (FL), the critical challenge is to balance local model improvement and global model tuning when the personal and global objectives may not be exactly aligned. Inspired by Bayesian hierarchical models, we develop a self-aware personalized FL method in which each client automatically balances the training of its local personal model against the global model that implicitly contributes to other clients' training. This balance is derived from inter-client and intra-client uncertainty quantification: larger inter-client variation implies that more personalization is needed. Accordingly, our method uses uncertainty-driven local training steps and an uncertainty-driven aggregation rule instead of conventional local fine-tuning and sample-size-based aggregation. Experiments on synthetic data, Amazon Alexa audio data, and public datasets such as MNIST, FEMNIST, CIFAR10, and Sent140 show that the proposed method achieves significantly better personalization performance than existing counterparts.
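
The abstract contrasts uncertainty-driven aggregation with conventional sample-size weighting (as in FedAvg), and ties the degree of personalization to inter- versus intra-client variation. The sketch below illustrates that intuition only: the function names, the precision weighting, and the shrinkage-style mixing coefficient are illustrative assumptions based on standard Bayesian hierarchical-model reasoning, not the paper's actual derived rule.

```python
import numpy as np

def sample_size_aggregate(client_models, client_sizes):
    """Conventional FedAvg-style aggregation: weight each client's model
    by its share of the total training samples."""
    w = np.asarray(client_sizes, dtype=float)
    w /= w.sum()
    return sum(wi * m for wi, m in zip(w, client_models))

def uncertainty_aggregate(client_models, client_variances):
    """Illustrative uncertainty-driven aggregation (an assumption, not the
    paper's rule): weight each client's model by the precision (inverse
    variance) of its local update, so noisier clients contribute less."""
    prec = 1.0 / np.asarray(client_variances, dtype=float)
    w = prec / prec.sum()
    return sum(wi * m for wi, m in zip(w, client_models))

def personalized_model(local_model, global_model, inter_var, intra_var):
    """Shrinkage-style balance in the spirit of a Bayesian hierarchical
    model: when inter-client variation dominates intra-client noise,
    lean toward the personal model."""
    lam = inter_var / (inter_var + intra_var)  # mixing weight in (0, 1)
    return lam * local_model + (1.0 - lam) * global_model

if __name__ == "__main__":
    models = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
    print(sample_size_aggregate(models, client_sizes=[100, 300]))
    print(uncertainty_aggregate(models, client_variances=[0.1, 0.4]))
    print(personalized_model(models[0], global_model=models[1],
                             inter_var=2.0, intra_var=0.5))
```

The mixing weight `lam` is the textbook hierarchical-model shrinkage factor; the paper derives its specific local training steps and aggregation weights from its own uncertainty quantification rather than this simplified heuristic.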

Authors (7)
  1. Huili Chen (20 papers)
  2. Jie Ding (123 papers)
  3. Eric Tramel (2 papers)
  4. Shuang Wu (99 papers)
  5. Anit Kumar Sahu (35 papers)
  6. Salman Avestimehr (116 papers)
  7. Tao Zhang (481 papers)
Citations (24)