PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning (2310.09183v2)

Published 13 Oct 2023 in cs.LG, cs.AI, and cs.DC

Abstract: Classical federated learning (FL) enables training machine learning models without sharing data for privacy preservation, but heterogeneous data characteristics degrade the performance of the localized models. Personalized FL (PFL) addresses this by synthesizing personalized models from a global model via training on local data. Such a global model may overlook the specific information of the clients it has sampled. In this paper, we propose a novel scheme to inject personalized prior knowledge into the global model in each client, which attempts to mitigate the incomplete-information problem introduced in PFL. At the heart of our proposed approach is a framework, PFL with Bregman Divergence (pFedBreD), which decouples the personalized prior from the local objective function regularized by Bregman divergence, for greater adaptability in personalized scenarios. We also relax mirror descent (RMD) to extract the prior explicitly and provide optional strategies. Additionally, pFedBreD is backed by a convergence analysis. Extensive experiments demonstrate that our method achieves state-of-the-art performance on 5 datasets and outperforms other methods by up to 3.5% across 8 benchmarks. Further analyses verify the robustness and necessity of the proposed designs.
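The abstract describes a local objective regularized by a Bregman divergence toward a personalized prior. As a rough, hedged illustration (not the paper's actual formulation or strategies), the sketch below uses the squared Euclidean distance, which is the Bregman divergence generated by φ(x) = ||x||²/2; the function names and the λ weight are illustrative assumptions.

```python
import numpy as np

def bregman_divergence_sq(theta, prior):
    # Squared Euclidean distance: the Bregman divergence generated by
    # phi(x) = ||x||^2 / 2. This is just one simple choice; pFedBreD's
    # framework admits other generating functions (assumption for illustration).
    return 0.5 * np.sum((theta - prior) ** 2)

def local_objective(local_loss, theta, prior, lam):
    # Personalized local objective: empirical loss on local data plus a
    # Bregman-divergence term pulling the client model toward its
    # personalized prior (lam controls the regularization strength).
    return local_loss(theta) + lam * bregman_divergence_sq(theta, prior)

# Toy usage: a quadratic local loss and a prior extracted from the global model.
loss = lambda t: np.sum(t ** 2)
theta = np.array([1.0, 2.0])
prior = np.array([0.5, 0.5])
print(local_objective(loss, theta, prior, lam=0.1))  # 5.0 + 0.1 * 1.25 = 5.125
```

In practice the prior would be derived per client (e.g., from the global model and local statistics), and the choice of generating function determines the geometry of the regularization.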

Authors (7)
  1. Mingjia Shi (14 papers)
  2. Yuhao Zhou (78 papers)
  3. Kai Wang (624 papers)
  4. Huaizheng Zhang (15 papers)
  5. Shudong Huang (14 papers)
  6. Qing Ye (28 papers)
  7. Jiangcheng Lv (1 paper)
Citations (6)