
Elastically-Constrained Meta-Learner for Federated Learning (2306.16703v3)

Published 29 Jun 2023 in cs.LG, cs.AI, and cs.CV

Abstract: Federated learning is an approach in which multiple parties that cannot share data collaboratively train machine learning models. One of the challenges in federated learning is non-IID data across clients: a single model cannot fit every client's data distribution. Meta-learning approaches such as Per-FedAvg have been introduced to address this challenge. Meta-learning learns shared initial parameters for all clients; each client then uses gradient descent to quickly adapt the initialization to its local data distribution, realizing model personalization. However, due to the non-convex loss function and the randomness of sampled updates, meta-learning approaches have unstable adaptation goals across rounds for the same client. This fluctuation in adaptation direction hinders convergence in meta-learning. To overcome this challenge, we use the historically adapted local model to restrict the direction of the inner loop and propose an elastically-constrained method. As a result, the current round's inner loop retains historical goals while adapting toward better solutions. Experiments show that our method accelerates meta-learning convergence and improves personalization without additional computation or communication. Our method achieved SOTA on all metrics across three public datasets.
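The core idea described in the abstract is an inner loop whose local adaptation is pulled toward the client's historically adapted model. Below is a minimal sketch of one such elastically-constrained inner-loop step in PyTorch, assuming a quadratic penalty of the form (lam/2)·||θ − θ_hist||²; the penalty shape, the weight `lam`, and the helper names are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of an elastically-constrained inner-loop step (assumption:
# a quadratic elastic penalty toward the previous round's adapted parameters).
import torch

def elastic_inner_step(model, batch, loss_fn, theta_hist, lam=0.1, lr=1e-2):
    """One gradient step on: local_loss + (lam/2) * ||theta - theta_hist||^2.

    theta_hist: list of detached tensors saved from the client's previous
    locally adapted model (the "historical goal" in the abstract).
    """
    x, y = batch
    loss = loss_fn(model(x), y)
    # Elastic term: keep the current adaptation close to the historical goal,
    # so the inner-loop direction does not fluctuate across rounds.
    for p, p_hist in zip(model.parameters(), theta_hist):
        loss = loss + 0.5 * lam * (p - p_hist).pow(2).sum()
    grads = torch.autograd.grad(loss, list(model.parameters()))
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p -= lr * g
    return loss.item()
```

In a full Per-FedAvg-style round, `theta_hist` would presumably be the adapted parameters saved from the client's previous participation, and the outer loop would meta-update the shared initialization as usual; since the penalty reuses a locally stored model, it adds no communication cost, consistent with the abstract's claim.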

Authors (8)
  1. Peng Lan
  2. Donglai Chen
  3. Chong Xie
  4. Keshu Chen
  5. Jinyuan He
  6. Juntao Zhang
  7. Yonghong Chen
  8. Yan Xu