
An In-Depth Evaluation of Federated Learning on Biomedical Natural Language Processing (2307.11254v2)

Published 20 Jul 2023 in cs.CL

Abstract: Language models (LMs) such as BERT and GPT have revolutionized NLP. However, the medical field faces challenges in training LMs due to limited data access and privacy constraints imposed by regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR). Federated learning (FL) offers a decentralized solution that enables collaborative learning while ensuring data privacy. In this study, we evaluated FL on 2 biomedical NLP tasks encompassing 8 corpora using 6 LMs. Our results show that: 1) FL models consistently outperformed models trained on individual clients' data and sometimes performed comparably with models trained on pooled data; 2) with a fixed amount of total data, FL models trained with more clients performed worse, but pre-trained transformer-based models exhibited great resilience; 3) FL models significantly outperformed LLMs using zero-/one-shot learning and offered lightning-fast inference speed.
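
The paper does not give its training code here, but the decentralized setup it evaluates typically follows the FedAvg pattern: each client updates a shared model on its private data, and a server aggregates the local updates without the raw data ever leaving a client. Below is a minimal, illustrative sketch of that pattern on a toy linear model with synthetic data; the function names (`local_step`, `fed_avg`), learning rate, and data generation are all hypothetical and not drawn from the paper.

```python
# Minimal FedAvg sketch (illustrative, not the authors' implementation).
# Each client runs a few gradient steps on its private data; the server
# averages the resulting weights, weighted by client dataset size.
import numpy as np

rng = np.random.default_rng(0)

def local_step(weights, X, y, lr=0.1, epochs=5):
    """Run a few epochs of gradient descent on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient, linear model
        w -= lr * grad
    return w

def fed_avg(weights, clients):
    """Server step: average client updates, weighted by local dataset size."""
    updates = [(local_step(weights, X, y), len(y)) for X, y in clients]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

# Synthetic "private" datasets for 3 clients, drawn from one true model.
true_w = np.array([2.0, -1.0])
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):           # communication rounds
    w = fed_avg(w, clients)   # raw data never leaves a client
print("federated estimate:", w)  # approaches [2, -1]
```

In the paper's setting the same loop would wrap fine-tuning of a pre-trained LM on each client's corpus rather than a linear regression, which is also where the reported resilience of transformer-based models to larger client counts comes into play.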

Authors (7)
  1. Le Peng (9 papers)
  2. Gaoxiang Luo (5 papers)
  3. Jiandong Chen (2 papers)
  4. Rui Zhang (1138 papers)
  5. Ziyue Xu (58 papers)
  6. Ju Sun (44 papers)
  7. Sicheng Zhou (15 papers)
Citations (1)
