An Aggregation-Free Federated Learning for Tackling Data Heterogeneity (2404.18962v1)

Published 29 Apr 2024 in cs.CV and cs.LG

Abstract: The performance of Federated Learning (FL) hinges on how effectively knowledge from distributed datasets is utilized. Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round. This process can cause client drift, especially under significant cross-client data heterogeneity, impairing model performance and the convergence of the FL algorithm. To address these challenges, we introduce FedAF, a novel aggregation-free FL algorithm. In this framework, clients collaboratively learn condensed data by leveraging peer knowledge; the server then trains the global model using the condensed data and soft labels received from the clients. FedAF inherently avoids the issue of client drift, enhances the quality of condensed data amid notable data heterogeneity, and improves global model performance. Extensive numerical studies on several popular benchmark datasets show that FedAF surpasses various state-of-the-art FL algorithms in handling label-skew and feature-skew data heterogeneity, leading to superior global model accuracy and faster convergence.
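
To make the condense-then-train flow concrete, here is a minimal sketch of one aggregation-free round in the spirit of the abstract: each client uploads a small condensed dataset plus soft labels, and the server trains the global model directly on that material instead of averaging client weights. This is not the authors' implementation; the condensation routine (class-mean prototypes), the soft-label construction, and the linear softmax model below are simplified, hypothetical stand-ins for the paper's collaborative data condensation.

```python
# Illustrative sketch of an aggregation-free FL round (NOT the FedAF paper's code).
# All helper names and hyperparameters here are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def condense_local_data(X, y, n_per_class, n_classes):
    """Hypothetical stand-in for data condensation: replicate class-wise
    feature means. Real methods would optimize synthetic samples (e.g. by
    distribution matching); this only illustrates the message format."""
    Xc, yc = [], []
    for c in range(n_classes):
        mask = y == c
        if mask.sum() == 0:
            continue
        proto = X[mask].mean(axis=0)
        Xc.append(np.tile(proto, (n_per_class, 1)))
        yc.append(np.full(n_per_class, c))
    return np.concatenate(Xc), np.concatenate(yc)

def soft_labels(y, n_classes, temperature=2.0):
    """Hypothetical soft labels: temperature-smoothed one-hot targets."""
    logits = np.eye(n_classes)[y] / temperature
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_global_model(Xs, Ps, n_classes, lr=0.5, epochs=200):
    """Server side: fit a linear softmax classifier on the union of
    condensed data and soft labels received from all clients."""
    X, P = np.concatenate(Xs), np.concatenate(Ps)
    W = np.zeros((X.shape[1], n_classes))
    for _ in range(epochs):
        logits = X @ W
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        Q = e / e.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (Q - P) / len(X)  # softmax cross-entropy gradient
    return W

# Toy label-skewed clients: each holds samples from a single class.
n_classes, dim = 3, 5
clients = [(rng.normal(loc=c, size=(40, dim)), np.full(40, c))
           for c in range(n_classes)]

# Clients condense locally and send (condensed data, soft labels) to the server.
Xs, Ps = [], []
for X, y in clients:
    Xc, yc = condense_local_data(X, y, n_per_class=5, n_classes=n_classes)
    Xs.append(Xc)
    Ps.append(soft_labels(yc, n_classes))

# No weight aggregation: the server trains directly on the uploads.
W = train_global_model(Xs, Ps, n_classes)
print("global model weights shape:", W.shape)
```

Because no client model weights are ever averaged, the client-drift problem of aggregate-then-adapt schemes does not arise by construction; the design question shifts to producing condensed data that remains informative under heterogeneity, which is what the paper's collaborative, peer-knowledge-based condensation targets.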

Authors (6)
  1. Yuan Wang (251 papers)
  2. Huazhu Fu (185 papers)
  3. Renuga Kanagavelu (4 papers)
  4. Qingsong Wei (12 papers)
  5. Yong Liu (721 papers)
  6. Rick Siow Mong Goh (59 papers)
Citations (10)