Multi-Model Federated Learning with Provable Guarantees (2207.04330v6)

Published 9 Jul 2022 in cs.LG, cs.DC, math.OC, and stat.ML

Abstract: Federated Learning (FL) is a variant of distributed learning where edge devices collaborate to learn a model without sharing their data with the central server or each other. We refer to the process of training multiple independent models simultaneously in a federated setting using a common pool of clients as multi-model FL. In this work, we propose two variants of the popular FedAvg algorithm for multi-model FL, with provable convergence guarantees. We further show that for the same amount of computation, multi-model FL can have better performance than training each model separately. We supplement our theoretical results with experiments in strongly convex, convex, and non-convex settings.
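The abstract does not spell out the two FedAvg variants, but the general multi-model pattern it describes (several independent models trained over one shared pool of clients, each client doing local updates that the server averages) can be sketched. Below is a minimal, illustrative sketch only: the uniform random split of clients across models per round, the quadratic client objectives, and the helper `local_sgd` are all assumptions for demonstration, not the paper's actual algorithms.

```python
# Minimal sketch of multi-model federated averaging (multi-model FL).
# ASSUMPTIONS: the per-round client-to-model assignment (random even split),
# the quadratic client objectives, and all names here are illustrative;
# the paper's two FedAvg variants may assign and aggregate differently.
import numpy as np

rng = np.random.default_rng(0)

NUM_CLIENTS, NUM_MODELS, DIM = 20, 2, 5
ROUNDS, LOCAL_STEPS, LR = 50, 5, 0.05

# Synthetic strongly convex client objectives: f_i(w) = 0.5 * ||w - b_i||^2,
# with a different target b_i per (client, model) pair.
targets = rng.normal(size=(NUM_MODELS, NUM_CLIENTS, DIM))

def local_sgd(w, b, steps, lr):
    """Run `steps` of gradient descent on 0.5 * ||w - b||^2 starting from w."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * (w - b)  # gradient of the quadratic client objective
    return w

# One global parameter vector per model.
models = [np.zeros(DIM) for _ in range(NUM_MODELS)]

for rnd in range(ROUNDS):
    # Assumed assignment rule: shuffle the client pool and split it evenly
    # across models, so every client trains exactly one model per round.
    perm = rng.permutation(NUM_CLIENTS)
    shards = np.array_split(perm, NUM_MODELS)
    for m, clients in enumerate(shards):
        # Each assigned client runs local SGD; the server then averages
        # the returned parameters (the FedAvg aggregation step).
        updates = [local_sgd(models[m], targets[m, c], LOCAL_STEPS, LR)
                   for c in clients]
        models[m] = np.mean(updates, axis=0)

for m, w in enumerate(models):
    # For these quadratics, the minimizer of the average client objective
    # is the mean target, so this gap tracks convergence.
    gap = np.linalg.norm(w - targets[m].mean(axis=0))
    print(f"model {m}: distance to average optimum = {gap:.4f}")
```

The random even split above is just one plausible way to share the client pool; the paper's contribution is precisely to analyze specific variants of this scheme and prove convergence guarantees, including settings where multi-model training outperforms training each model separately at equal total computation.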

Authors (3)
  1. Neelkamal Bhuyan (4 papers)
  2. Sharayu Moharir (30 papers)
  3. Gauri Joshi (73 papers)
Citations (12)
