
Federated pretraining and fine tuning of BERT using clinical notes from multiple silos

Published 20 Feb 2020 in cs.CL and cs.LG (arXiv:2002.08562v1)

Abstract: Large scale contextual representation models, such as BERT, have significantly advanced NLP in recent years. However, in certain domains like healthcare, accessing diverse large scale text data from multiple institutions is extremely challenging due to privacy and regulatory reasons. In this article, we show that it is possible to both pretrain and fine tune BERT models in a federated manner using clinical texts from different silos without moving the data.
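The abstract does not spell out the aggregation scheme, but federated training of this kind is typically built on federated averaging (FedAvg, McMahan et al., 2017): each silo trains a copy of the global model on its private notes, and a server averages the resulting weights. The sketch below illustrates that pattern for BERT masked-language-model pretraining; the tiny `BertConfig`, the random token batches standing in for clinical notes, and the two-silo setup are placeholder assumptions, not the paper's actual configuration.

```python
# Minimal FedAvg sketch for federated BERT pretraining (illustrative only).
import copy
import torch
from transformers import BertConfig, BertForMaskedLM

# Tiny placeholder config so the example runs quickly without downloads.
config = BertConfig(vocab_size=1000, hidden_size=64, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=128)
global_model = BertForMaskedLM(config)

def local_update(global_state, batches, lr=5e-5, epochs=1):
    """Train a copy of the global model on one silo's private batches."""
    model = BertForMaskedLM(config)
    model.load_state_dict(global_state)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for input_ids in batches:
            # Masked-LM loss; labels == inputs here purely for brevity.
            loss = model(input_ids=input_ids, labels=input_ids).loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model.state_dict()

def fed_avg(states, weights):
    """Weighted average of client state_dicts (FedAvg aggregation)."""
    total = sum(weights)
    avg = copy.deepcopy(states[0])
    for key in avg:
        mixed = sum(w * s[key].float() for s, w in zip(states, weights)) / total
        avg[key] = mixed.to(avg[key].dtype)  # preserve integer buffers
    return avg

# Two hypothetical silos; random token ids stand in for clinical notes.
silos = [[torch.randint(0, 1000, (4, 32)) for _ in range(2)] for _ in range(2)]

for communication_round in range(3):
    global_state = global_model.state_dict()
    client_states = [local_update(global_state, data) for data in silos]
    sizes = [sum(b.size(0) for b in d) for d in silos]  # examples per silo
    global_model.load_state_dict(fed_avg(client_states, sizes))
```

Only model weights cross institutional boundaries in each communication round; the clinical text itself never leaves its silo, which is the property the paper's title refers to. Federated fine tuning follows the same loop with a task head (e.g., sequence classification) in place of the masked-LM objective.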

Citations (32)

