
Coded Matrix Computations for D2D-enabled Linearized Federated Learning (2302.12305v1)

Published 23 Feb 2023 in cs.IT, cs.LG, and math.IT

Abstract: Federated learning (FL) is a popular technique for training a global model on data distributed across client devices. Like other distributed training techniques, FL is susceptible to straggler (slower or failed) clients. Recent work has proposed to address this through device-to-device (D2D) offloading, which introduces privacy concerns. In this paper, we propose a novel straggler-optimal approach for coded matrix computations which can significantly reduce the communication delay and mitigate the privacy concerns introduced by D2D data transmissions in FL. Moreover, our proposed approach leads to a considerable improvement of the local computation speed when the generated data matrix is sparse. Numerical evaluations confirm the superiority of our proposed method over baseline approaches.
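The straggler-mitigation primitive behind coded matrix computation can be illustrated with a standard MDS-coded matrix-vector multiplication (a generic sketch of the idea, not the paper's specific construction): the data matrix is split into k blocks, encoded into n > k coded blocks distributed to workers, and the full product is recoverable from any k worker results, so up to n - k stragglers can be ignored.

```python
import numpy as np

# Generic MDS-coded matrix-vector multiply sketch (illustrative only;
# the paper proposes its own straggler-optimal construction).
rng = np.random.default_rng(0)
k, n = 3, 5                       # k data blocks, n workers -> tolerates n-k stragglers
A = rng.standard_normal((6, 4))   # data matrix; its rows are split into k blocks
x = rng.standard_normal(4)

blocks = np.split(A, k)                    # k row-blocks of A
evals = np.arange(1, n + 1)                # distinct evaluation points per worker
G = np.vander(evals, k, increasing=True)   # n x k Vandermonde encoding matrix
coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

# Worker i computes coded[i] @ x; suppose workers 1 and 3 straggle.
results = {i: coded[i] @ x for i in (0, 2, 4)}
ids = sorted(results)

# Decode: any k rows of G are invertible, so solve the k x k system
# G[ids] @ B = stacked results for B, whose rows are the block products.
B = np.linalg.solve(G[ids].astype(float),
                    np.stack([results[i] for i in ids]))
recovered = B.reshape(-1)          # concatenated block products = A @ x
assert np.allclose(recovered, A @ x)
```

Any k of the n responses suffice because every k x k submatrix of a Vandermonde matrix with distinct evaluation points is invertible, which is what makes the code MDS.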

Authors (4)
  1. Anindya Bijoy Das (24 papers)
  2. Aditya Ramamoorthy (57 papers)
  3. David J. Love (98 papers)
  4. Christopher G. Brinton (109 papers)
Citations (2)
