Utilizing Free Clients in Federated Learning for Focused Model Enhancement (2310.04515v1)
Abstract: Federated Learning (FL) is a distributed machine learning approach for learning models on decentralized, heterogeneous data without requiring clients to share their data. Many existing FL approaches assume that all clients are equally important and construct a global objective over all clients. We consider a variant of FL we call Prioritized FL, where the goal is to learn a weighted mean objective over a subset of clients, designated as priority clients. An important question arises: How do we choose and incentivize well-aligned non-priority clients to participate in the federation while discarding misaligned clients? We present FedALIGN (Federated Adaptive Learning with Inclusion of Global Needs) to address this challenge. The algorithm employs a matching strategy that selects non-priority clients based on how similar the model's loss on their data is to its loss on the global data, thereby using non-priority client gradients only when doing so benefits the priority clients. This approach yields mutual benefits: non-priority clients are motivated to join when the model performs satisfactorily on their data, and priority clients can leverage these clients' updates and computational resources when their goals align. We present a convergence analysis that quantifies the trade-off between client selection and speed of convergence. Our algorithm achieves faster convergence and higher test accuracy than baselines on a variety of synthetic and benchmark datasets.
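To make the matching strategy concrete, the following Python snippet is a minimal sketch of the loss-based selection step described in the abstract, not the paper's exact algorithm: the names `eval_loss`, `priority_loaders`, `nonpriority_loaders`, and `threshold` are assumptions introduced here for illustration, and the paper's actual matching criterion and threshold schedule may differ. The idea is that a non-priority client is included in a round only when the current global model's loss on its local data is close to the loss on the priority data, suggesting its gradients will help the priority objective.

```python
import numpy as np

def select_nonpriority_clients(global_model, priority_loaders,
                               nonpriority_loaders, eval_loss,
                               threshold=0.1):
    """Hypothetical sketch of a loss-matching selection rule.

    Include a non-priority client only if the global model's loss on its
    data is close to the model's loss on the priority clients' data.
    """
    # Average loss of the current global model on priority-client data.
    global_loss = np.mean([eval_loss(global_model, data)
                           for data in priority_loaders])

    selected = []
    for client_id, data in nonpriority_loaders.items():
        client_loss = eval_loss(global_model, data)
        # Keep the client only when its loss is similar to the priority
        # loss, i.e. its local objective appears aligned.
        if abs(client_loss - global_loss) <= threshold:
            selected.append(client_id)
    return selected
```

In a full training round, the selected non-priority clients would then contribute model updates alongside the priority clients, whose updates are combined according to the weighted mean objective.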
Authors: Aditya Narayan Ravi, Ilan Shomorony