Towards Sparsified Federated Neuroimaging Models via Weight Pruning (2208.11669v1)

Published 24 Aug 2022 in cs.LG, cs.CR, eess.IV, and q-bio.QM

Abstract: Federated training of large deep neural networks can often be restrictive due to the increasing costs of communicating the updates with increasing model sizes. Various model pruning techniques have been designed in centralized settings to reduce inference times. Combining centralized pruning techniques with federated training seems intuitive for reducing communication costs -- by pruning the model parameters right before the communication step. Moreover, such a progressive model pruning approach during training can also reduce training times/costs. To this end, we propose FedSparsify, which performs model pruning during federated training. In our experiments in centralized and federated settings on the brain age prediction task (estimating a person's age from their brain MRI), we demonstrate that models can be pruned up to 95% sparsity without affecting performance even in challenging federated learning environments with highly heterogeneous data distributions. One surprising benefit of model pruning is improved model privacy. We demonstrate that models with high sparsity are less susceptible to membership inference attacks, a type of privacy attack.
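To make the core idea concrete, here is a minimal Python sketch of magnitude-based pruning applied to a client's weights right before the communication step, with sparsity increased progressively over federated rounds. The function names, the linear sparsification schedule, and the magnitude criterion are illustrative assumptions, not the paper's confirmed FedSparsify implementation.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries so that roughly `sparsity`
    fraction of the weights are exactly zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def prune_before_communication(local_weights, round_idx, total_rounds,
                               final_sparsity=0.95):
    """Progressively raise the target sparsity over training rounds and
    prune the local model just before it is sent to the server.
    The linear schedule below is an assumption for illustration."""
    current_sparsity = final_sparsity * (round_idx + 1) / total_rounds
    return magnitude_prune(local_weights, current_sparsity)

# Toy usage: prune a random weight matrix at round 5 of 10.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 8))
    pruned = prune_before_communication(w, round_idx=4, total_rounds=10)
    print("achieved sparsity:", np.mean(pruned == 0))
```

Because the pruned update is mostly zeros, it can be communicated in sparse form, which is the source of the communication savings the abstract describes.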

Authors (6)
  1. Dimitris Stripelis (19 papers)
  2. Umang Gupta (16 papers)
  3. Nikhil Dhinagar (4 papers)
  4. Greg Ver Steeg (95 papers)
  5. Paul Thompson (21 papers)
  6. José Luis Ambite (5 papers)
