HyperFedNet: Communication-Efficient Personalized Federated Learning Via Hypernetwork
Abstract: In response to the challenges posed by non-independent and identically distributed (non-IID) data and the escalating threat of privacy attacks in Federated Learning (FL), we introduce HyperFedNet (HFN), a novel architecture that uses hypernetworks to rethink parameter aggregation and transmission in FL. Traditional FL approaches transmit the full set of model parameters, which not only incurs significant communication overhead but also exposes clients to privacy breaches through gradient analysis. HFN addresses both issues by transmitting only a compact set of hypernetwork parameters, reducing communication costs and strengthening privacy protection. Once deployed, HFN dynamically generates the parameters of the basic layers of the FL main network, taking as input an embedding vector that quantifies the features of each client's local dataset. Extensive experiments show that HFN outperforms conventional FL methods in both communication overhead and model accuracy, offering a practical solution to the challenges of non-IID data and privacy threats.
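The core mechanism the abstract describes — a small hypernetwork that takes a client-specific embedding and generates the weights of a basic layer of the main network, so that only the hypernetwork's parameters need to be communicated — can be sketched as follows. This is a minimal illustration, not the paper's implementation: all dimensions, names (`HyperNet`, `base_layer_forward`), and the linear hypernetwork form are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper): a client embedding of
# size 8 is mapped to the weights of a 32x10 "basic layer" of the main network.
EMB_DIM, IN_DIM, OUT_DIM = 8, 32, 10


class HyperNet:
    """Toy linear hypernetwork: client embedding -> flattened layer weights.

    In an HFN-style scheme, only these hypernetwork parameters (plus the
    embedding machinery) would be exchanged with the server; the main
    network's layer weights are regenerated locally and never transmitted.
    In practice the hypernetwork is shared across layers/chunks of the main
    network, so its size stays small relative to the full model.
    """

    def __init__(self):
        self.W = rng.standard_normal((EMB_DIM, IN_DIM * OUT_DIM)) * 0.01
        self.b = np.zeros(IN_DIM * OUT_DIM)

    def generate(self, embedding):
        # Map the embedding to a flat parameter vector, then reshape it
        # into the weight matrix of the target layer.
        flat = embedding @ self.W + self.b
        return flat.reshape(IN_DIM, OUT_DIM)


def base_layer_forward(x, weights):
    """Basic layer of the main network, using hypernetwork-generated weights."""
    return x @ weights


# Each client keeps a local embedding that summarizes its data distribution.
client_embedding = rng.standard_normal(EMB_DIM)
hyper = HyperNet()
W_base = hyper.generate(client_embedding)                 # shape (32, 10)
logits = base_layer_forward(rng.standard_normal((4, IN_DIM)), W_base)
print(W_base.shape, logits.shape)                         # (32, 10) (4, 10)
```

Because clients with similar data produce similar embeddings, the generated layer weights are personalized per client while the shared hypernetwork captures knowledge common to all of them.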