
Multi-Task Federated Learning for Personalised Deep Neural Networks in Edge Computing (2007.09236v3)

Published 17 Jul 2020 in cs.LG and stat.ML

Abstract: Federated Learning (FL) is an emerging approach for collaboratively training Deep Neural Networks (DNNs) on mobile devices, without private user data leaving the devices. Previous works have shown that non-Independent and Identically Distributed (non-IID) user data harms the convergence speed of the FL algorithms. Furthermore, most existing work on FL measures global-model accuracy, but in many cases, such as user content-recommendation, improving individual User model Accuracy (UA) is the real objective. To address these issues, we propose a Multi-Task FL (MTFL) algorithm that introduces non-federated Batch-Normalization (BN) layers into the federated DNN. MTFL benefits UA and convergence speed by allowing users to train models personalised to their own data. MTFL is compatible with popular iterative FL optimisation algorithms such as Federated Averaging (FedAvg), and we show empirically that a distributed form of Adam optimisation (FedAvg-Adam) benefits convergence speed even further when used as the optimisation strategy within MTFL. Experiments using MNIST and CIFAR10 demonstrate that MTFL is able to significantly reduce the number of rounds required to reach a target UA, by up to $5\times$ when using existing FL optimisation strategies, and with a further $3\times$ improvement when using FedAvg-Adam. We compare MTFL to competing personalised FL algorithms, showing that it is able to achieve the best UA for MNIST and CIFAR10 in all considered scenarios. Finally, we evaluate MTFL with FedAvg-Adam on an edge-computing testbed, showing that its convergence and UA benefits outweigh its overhead.

Authors (3)
  1. Jed Mills (4 papers)
  2. Jia Hu (41 papers)
  3. Geyong Min (35 papers)
Citations (169)

Summary

Multi-Task Federated Learning for Personalized Deep Neural Networks in Edge Computing

The paper proposes a novel approach to Federated Learning (FL) that addresses several challenges identified in standard FL methods, particularly regarding convergence speed and personalized model accuracy. The method, termed Multi-Task Federated Learning (MTFL), integrates non-federated Batch Normalization (BN) layers into deep neural networks deployed in edge computing environments. This strategy enables clients to train models that are personalized to their own data without compromising convergence speed, aligning FL with scenarios where local accuracy supersedes global model accuracy, such as user content-recommendation systems.
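To make the core mechanism concrete, below is a minimal sketch (assuming PyTorch) of how a client could keep its Batch Normalization parameters and statistics private while federating all other layers, in the spirit of MTFL. The helper names `split_state` and `merge_global` are illustrative and not taken from the paper.

```python
# Sketch: separate private BN state from federated state on one client.
import torch
import torch.nn as nn

def split_state(model: nn.Module):
    """Separate BN parameters/buffers (kept on-device) from federated tensors."""
    bn_modules = {
        name for name, m in model.named_modules()
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d))
    }
    private, shared = {}, {}
    for key, tensor in model.state_dict().items():
        owner = key.rsplit(".", 1)[0]  # module path that owns this tensor
        (private if owner in bn_modules else shared)[key] = tensor.clone()
    return private, shared

def merge_global(model: nn.Module, global_shared: dict, local_private: dict):
    """Load the server's federated weights, then restore this client's BN state."""
    state = model.state_dict()
    state.update(global_shared)   # overwrite federated layers with the global model
    state.update(local_private)   # keep the personalised, non-federated BN layers
    model.load_state_dict(state)
```

In each round, a client would call `merge_global` before local training and send only the `shared` part of `split_state` back to the server, so the BN layers never leave the device.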

Key Findings and Numerical Results

The empirical findings underscore the efficacy of MTFL on the MNIST and CIFAR10 datasets. MTFL reduces the number of rounds required to reach a target User model Accuracy (UA) by up to $5\times$ compared to existing FL optimization strategies, and employing a distributed form of Adam optimization (FedAvg-Adam) yields a further $3\times$ reduction in the communication rounds needed. This suggests significant improvements in both convergence efficiency and personalization when MTFL is combined with non-federated BN layers and advanced optimization strategies.

Convergence Enhancements through FedAvg-Adam

FedAvg-Adam introduces adaptive optimization into the federated learning paradigm, yielding a substantial improvement in convergence speed. Whereas FedAvg averages models trained locally with stochastic gradient descent, FedAvg-Adam incorporates the Adam optimizer at both the client and global model levels, allowing faster convergence and enhanced UA. This makes FedAvg-Adam particularly well suited to the MTFL framework, where personalized BN layers further improve convergence and applicability to real-world scenarios involving non-IID data.
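As a rough illustration of combining FedAvg-style aggregation with Adam, the sketch below treats the averaged client update as a pseudo-gradient and applies Adam on the server. This is only an approximation of the idea: the paper's FedAvg-Adam also runs Adam on the clients, so the function and parameter names here are assumptions for illustration.

```python
# Sketch: one server round of FedAvg aggregation followed by an Adam step
# on the resulting pseudo-gradient (global_w - average of client weights).
import numpy as np

def server_adam_round(global_w, client_ws, m, v, t,
                      lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """Average client models, then apply Adam to the pseudo-gradient."""
    avg_w = np.mean(client_ws, axis=0)      # plain FedAvg aggregate
    pseudo_grad = global_w - avg_w          # direction the clients moved in
    t += 1
    m = b1 * m + (1 - b1) * pseudo_grad     # first-moment estimate
    v = b2 * v + (1 - b2) * pseudo_grad**2  # second-moment estimate
    m_hat = m / (1 - b1**t)                 # bias correction
    v_hat = v / (1 - b2**t)
    new_w = global_w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return new_w, m, v, t
```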

Implications, Challenges, and Future Perspectives

The paper highlights several implications of the findings. The integration of personalized BN layers not only enhances privacy—by preventing sharing of complete model parameters—but also facilitates a tailored approach to handling non-IID distributions in FL. Furthermore, the results suggest that optimizing local model accuracy by utilizing such personalized layers can be critical for applications demanding high individual client model performance.

The practical implications extend to areas like healthcare data analysis and mobile content recommendation systems, where data privacy and user-specific model performance are paramount. The research may pave the way for new explorations into decentralized learning systems and adaptive optimization strategies, potentially expanding the scope and impact of Federated Learning within edge computing environments.

Future Developments in Multi-Task Learning within FL

Looking forward, MTFL presents an intriguing avenue for further study, particularly concerning optimal configurations of private patch layers, their impact on information propagation in DNNs, and balancing client-specific customization with global model utility. Extensions to other personalization strategies and adaptive optimization techniques could further enhance the robustness and scalability of federated learning systems in varied deployment contexts, including peer-to-peer learning frameworks.

The recognition of User model Accuracy as a pivotal metric underlines the growing need to develop robust personalization strategies in distributed learning environments, offering immense potential for future research endeavors aimed at enhancing client-centric machine learning models.