FedNLP: Benchmarking Federated Learning Methods for Natural Language Processing Tasks (2104.08815v3)

Published 18 Apr 2021 in cs.CL, cs.AI, and cs.LG

Abstract: Increasing concerns and regulations about data privacy and sparsity necessitate the study of privacy-preserving, decentralized learning methods for NLP tasks. Federated learning (FL) provides promising approaches for a large number of clients (e.g., personal devices or organizations) to collaboratively learn a shared global model that benefits all clients while allowing users to keep their data locally. Despite interest in studying FL methods for NLP tasks, a systematic comparison and analysis is lacking in the literature. Herein, we present FedNLP, a benchmarking framework for evaluating federated learning methods on four different task formulations: text classification, sequence tagging, question answering, and seq2seq. We propose a universal interface between Transformer-based LLMs (e.g., BERT, BART) and FL methods (e.g., FedAvg, FedOPT, etc.) under various non-IID partitioning strategies. Our extensive experiments with FedNLP provide empirical comparisons between FL methods and help us better understand the inherent challenges of this direction. The comprehensive analysis points to intriguing and exciting future research aimed at developing FL methods for NLP tasks.
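
To make the FL setup referenced in the abstract concrete, the following is a minimal sketch of FedAvg-style server aggregation, not the FedNLP codebase itself. It assumes each client trains a Transformer model locally on its (possibly non-IID) partition and returns `(num_examples, state_dict)`; the function name `fedavg_aggregate` and the usage below are illustrative assumptions.

```python
# Minimal FedAvg aggregation sketch (illustrative; not the FedNLP implementation).
# Assumes PyTorch models: each client returns (num_examples, state_dict) after
# local training, and the server averages parameters weighted by data size.
import torch


def fedavg_aggregate(client_updates):
    """client_updates: list of (num_examples, state_dict) tuples."""
    total = sum(n for n, _ in client_updates)
    global_state = {}
    for key in client_updates[0][1]:
        # Weighted average of each parameter tensor across clients.
        global_state[key] = sum(
            (n / total) * state[key].float() for n, state in client_updates
        )
    return global_state


# Hypothetical usage:
# updates = [(len(client_data), local_model.state_dict()) for client_data, local_model in clients]
# global_model.load_state_dict(fedavg_aggregate(updates))
```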

Authors (10)
  1. Bill Yuchen Lin (72 papers)
  2. Chaoyang He (46 papers)
  3. Zihang Zeng (2 papers)
  4. Hulin Wang (5 papers)
  5. Yufen Huang (1 paper)
  6. Christophe Dupuy (15 papers)
  7. Rahul Gupta (146 papers)
  8. Mahdi Soltanolkotabi (79 papers)
  9. Xiang Ren (194 papers)
  10. Salman Avestimehr (116 papers)
Citations (93)
