Personalized Federated Learning with Contextual Modulation and Meta-Learning (2312.15191v1)
Abstract: Federated learning has emerged as a promising approach for training machine learning models on decentralized data sources while preserving data privacy. However, challenges such as communication bottlenecks, heterogeneity of client devices, and non-i.i.d. data distributions pose significant obstacles to achieving optimal model performance. We propose a novel framework that combines federated learning with meta-learning techniques to enhance both efficiency and generalization capabilities. Our approach introduces a federated modulator that learns contextual information from data batches and uses this knowledge to generate modulation parameters. These parameters dynamically adjust the activations of a base model, which is personalized to each client using a MAML-based approach. Experimental results across diverse datasets demonstrate improvements in convergence speed and model performance over existing federated learning approaches. These findings highlight the potential of incorporating contextual information and meta-learning techniques into federated learning, paving the way for advancements in distributed machine learning paradigms.
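As a rough illustration of the architecture the abstract describes, the sketch below pairs a contextual modulator, which embeds a data batch and emits modulation parameters, with a base model whose hidden activations are scaled and shifted by those parameters, followed by a single MAML-style inner adaptation step. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the class names (`FederatedModulator`, `ModulatedBaseModel`), the layer sizes, and the FiLM-style scale-and-shift form of the modulation parameters are illustrative choices.

```python
# Minimal sketch (assumptions noted above): a batch-level contextual modulator
# produces (gamma, beta) parameters that modulate a base model's activations;
# personalization is a first-order MAML-style inner step on a client batch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FederatedModulator(nn.Module):
    """Encodes a data batch into a context vector and maps it to modulation parameters."""
    def __init__(self, in_dim: int, hidden_dim: int, ctx_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, ctx_dim), nn.ReLU())
        # One (gamma, beta) pair per hidden unit of the base model (assumed FiLM-style).
        self.to_gamma = nn.Linear(ctx_dim, hidden_dim)
        self.to_beta = nn.Linear(ctx_dim, hidden_dim)

    def forward(self, x_batch: torch.Tensor):
        ctx = self.encoder(x_batch).mean(dim=0)  # aggregate batch-level context
        return self.to_gamma(ctx), self.to_beta(ctx)


class ModulatedBaseModel(nn.Module):
    """Base model whose hidden activations are scaled and shifted by the modulator."""
    def __init__(self, in_dim: int, hidden_dim: int, n_classes: int):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, n_classes)

    def forward(self, x, gamma, beta):
        h = F.relu(self.fc1(x))
        h = gamma * h + beta  # contextual modulation of activations
        return self.fc2(h)


def maml_inner_step(base, modulator, x_support, y_support, inner_lr=0.01):
    """One first-order MAML-style adaptation step on a client's support batch."""
    gamma, beta = modulator(x_support)
    loss = F.cross_entropy(base(x_support, gamma, beta), y_support)
    grads = torch.autograd.grad(loss, list(base.parameters()))
    # Return adapted parameters without overwriting the shared (global) weights.
    return [p - inner_lr * g for p, g in zip(base.parameters(), grads)]


if __name__ == "__main__":
    base = ModulatedBaseModel(in_dim=32, hidden_dim=128, n_classes=10)
    modulator = FederatedModulator(in_dim=32, hidden_dim=128)
    x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
    adapted = maml_inner_step(base, modulator, x, y)
    print(len(adapted), "adapted parameter tensors")
```

In a full federated round, each client would adapt a local copy with `maml_inner_step` while the modulator and base weights are aggregated on the server (e.g., FedAvg-style); that outer loop is omitted here for brevity.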
Authors: Anna Vettoruzzo, Mohamed-Rafik Bouguelia, Thorsteinn Rögnvaldsson