FewFedWeight: Few-shot Federated Learning Framework across Multiple NLP Tasks (2212.08354v1)

Published 16 Dec 2022 in cs.CL

Abstract: Massively multi-task learning with LLMs has recently made substantial progress on few-shot generalization. However, this is usually performed in a centralized learning fashion, ignoring the privacy sensitivity issue of (annotated) data used in multiple tasks. To mitigate this issue, we propose FewFedWeight, a few-shot federated learning framework across multiple tasks, to achieve the best of both worlds: privacy preservation and cross-task generalization. FewFedWeight trains client models in isolated devices without sharing data. It broadcasts the global model in the server to each client and produces pseudo data for clients so that knowledge from the global model can be explored to enhance few-shot learning of each client model. An energy-based algorithm is further proposed to weight pseudo samples in order to reduce the negative impact of noise from the generated pseudo data. Adaptive model weights of client models are also tuned according to their performance. We use these model weights to dynamically aggregate client models to update the global model. Experiments on 118 NLP tasks show that FewFedWeight can significantly improve the performance of client models on 61% of tasks with an average performance improvement rate of 30.5% over the baseline and substantially outperform FedAvg and other decentralized learning methods.
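The abstract describes updating the global model by aggregating client models with weights that track each client's performance. The sketch below illustrates that idea only in outline; the softmax weighting rule, the function names, and the data structures are assumptions made for exposition and are not the paper's exact formulation.

```python
import numpy as np

def aggregate_client_models(client_states, client_scores):
    """Performance-weighted averaging of client model parameters.

    client_states: list of dicts mapping parameter name -> np.ndarray,
        one dict per client.
    client_scores: list of per-client performance scores (higher is better).

    The softmax over scores used here is an illustrative assumption;
    FewFedWeight's actual adaptive weighting may differ.
    """
    scores = np.asarray(client_scores, dtype=np.float64)
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()

    # Weighted sum of each parameter tensor across clients.
    global_state = {}
    for name in client_states[0]:
        global_state[name] = sum(
            w * state[name] for w, state in zip(weights, client_states)
        )
    return global_state

# Example usage with two toy "clients" holding a single weight matrix each.
if __name__ == "__main__":
    clients = [
        {"linear.weight": np.ones((2, 2))},
        {"linear.weight": np.zeros((2, 2))},
    ]
    print(aggregate_client_models(clients, client_scores=[0.9, 0.3]))
```

Setting all scores equal reduces this scheme to plain FedAvg-style uniform averaging, which is the baseline the paper compares against.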

Authors (6)
  1. Weilong Dong (9 papers)
  2. Xinwei Wu (10 papers)
  3. Junzhuo Li (10 papers)
  4. Shuangzhi Wu (29 papers)
  5. Chao Bian (21 papers)
  6. Deyi Xiong (104 papers)
Citations (4)
