
Stochastic Approximation Approach to Federated Machine Learning

Published 20 Feb 2024 in cs.LG (arXiv:2402.12945v1)

Abstract: This paper examines federated learning (FL) within a stochastic approximation (SA) framework. FL is a collaborative way to train neural network models across multiple participants, or clients, without centralizing their data. Each client trains a model on its own data and periodically sends the weights to a server for aggregation. The server aggregates these weights, which the clients then use to re-initialize their neural networks and continue training. SA is an iterative algorithm that uses approximate sample gradients and a tapering step size to locate a minimizer of a cost function. In this paper, each client uses a stochastic approximation iterate to update the weights of its neural network. It is shown that the aggregated weights track an autonomous ODE. Numerical simulations are performed and the results are compared with standard algorithms such as FedAvg and FedProx. The proposed algorithm is observed to be robust and to give more reliable estimates of the weights, in particular when the clients' data are not identically distributed.
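The training scheme described in the abstract (local SA iterates with a tapering step size, followed by server-side averaging of the client weights) can be illustrated on a toy problem. The sketch below is not the authors' implementation: the quadratic client objectives, the 1/n step-size schedule, the noise model, and all function names are illustrative assumptions.

```python
import numpy as np

def sa_client_update(w, grad_fn, round_idx, local_steps=10, rng=None):
    # One client's local training: SA iterates with a tapering step
    # size a_n ~ 1/n, satisfying the usual SA conditions
    # (sum a_n = inf, sum a_n^2 < inf). grad_fn returns a noisy
    # (stochastic) gradient estimate at w.
    rng = rng if rng is not None else np.random.default_rng()
    w = w.copy()
    for k in range(local_steps):
        n = round_idx * local_steps + k + 1  # global iterate count
        a_n = 1.0 / n                        # tapering step size
        w = w - a_n * grad_fn(w, rng)
    return w

def federated_sa(client_grad_fns, w0, rounds=50, local_steps=10, seed=0):
    # Server loop: broadcast the current weights, collect the clients'
    # SA-updated weights, and aggregate by plain averaging.
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    for r in range(rounds):
        updates = [sa_client_update(w, g, r, local_steps, rng)
                   for g in client_grad_fns]
        w = np.mean(updates, axis=0)  # aggregation step
    return w

# Toy non-identically-distributed setup: client i holds the quadratic
# objective ||w - c_i||^2, so the average objective is minimized at the
# mean of the centers c_i.
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 2.0])]
client_grad_fns = [lambda w, rng, c=c: 2.0 * (w - c) + 0.1 * rng.standard_normal(2)
                   for c in centers]
w_final = federated_sa(client_grad_fns, w0=np.zeros(2))
print(w_final)  # close to the mean of the centers, [1.0, 1.0]
```

Because the step sizes taper, the noise in the sample gradients is averaged out over rounds, and the aggregated weights track the mean-field ODE rather than any single client's gradient flow; this is the mechanism behind the robustness claim under non-identically-distributed client data.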

Authors (2)
