
FedFwd: Federated Learning without Backpropagation (2309.01150v1)

Published 3 Sep 2023 in cs.LG and cs.AI

Abstract: In federated learning (FL), clients with limited resources can disrupt training efficiency. A potential solution to this problem is to adopt a learning procedure that does not rely on backpropagation (BP). We present FedFwd, a novel approach to FL that employs a recent BP-free method by Hinton (2022), the Forward-Forward algorithm, in the local training process. By performing layer-wise local updates, FedFwd significantly reduces the computation needed to update parameters and removes the need to store all intermediate activation values during training. We evaluate FedFwd on standard datasets, including MNIST and CIFAR-10, and show that it performs competitively with other BP-dependent FL methods.
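
To make the layer-wise, BP-free local update concrete, below is a minimal sketch in the spirit of Hinton's Forward-Forward algorithm, which FedFwd uses for client-side training. It assumes PyTorch; the names (FFLayer, theta as the goodness threshold) and all hyperparameters are illustrative placeholders rather than details from the paper, and a full FedFwd client would additionally exchange its updated weights with a server.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    """One layer with its own local objective; gradients never cross layers."""
    def __init__(self, in_dim, out_dim, theta=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.theta = theta  # "goodness" threshold (illustrative value)
        self.opt = torch.optim.SGD(self.parameters(), lr=lr)

    def forward(self, x):
        # Length-normalize the input so the previous layer's goodness
        # cannot leak into this layer's objective.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def local_update(self, x_pos, x_neg):
        # Goodness = mean squared activation. Push positive samples above
        # theta and negative samples below it.
        g_pos = self.forward(x_pos).pow(2).mean(dim=1)
        g_neg = self.forward(x_neg).pow(2).mean(dim=1)
        loss = (F.softplus(self.theta - g_pos)
                + F.softplus(g_neg - self.theta)).mean()
        self.opt.zero_grad()
        loss.backward()  # gradient stays local to this layer's parameters
        self.opt.step()
        # Detach outputs: the next layer trains on them as plain inputs,
        # so no chain of intermediate activations has to be stored.
        with torch.no_grad():
            return self.forward(x_pos), self.forward(x_neg)

# Client-side local training: update layers one at a time, front to back.
layers = [FFLayer(784, 500), FFLayer(500, 500)]
x_pos = torch.rand(32, 784)  # placeholder "positive" (real) samples
x_neg = torch.rand(32, 784)  # placeholder "negative" (corrupted) samples
for layer in layers:
    x_pos, x_neg = layer.local_update(x_pos, x_neg)

Because each layer optimizes only its own goodness objective and passes detached activations forward, no end-to-end backward pass is ever run and intermediate activations need not be retained across layers, which is the computational saving the abstract highlights.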

Authors (4)
  1. Seonghwan Park (3 papers)
  2. Dahun Shin (2 papers)
  3. Jinseok Chung (3 papers)
  4. Namhoon Lee (19 papers)
Citations (4)
