Federated pretraining and fine tuning of BERT using clinical notes from multiple silos (2002.08562v1)

Published 20 Feb 2020 in cs.CL and cs.LG

Abstract: Large-scale contextual representation models, such as BERT, have significantly advanced NLP in recent years. However, in certain areas like healthcare, accessing diverse, large-scale text data from multiple institutions is extremely challenging due to privacy and regulatory reasons. In this article, we show that it is possible to both pretrain and fine-tune BERT models in a federated manner using clinical texts from different silos without moving the data.
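
The federated setup described in the abstract can be illustrated with a minimal federated-averaging (FedAvg) sketch. This is not the authors' exact training pipeline: it assumes a Hugging Face-style BERT model whose forward pass returns an object with a `.loss` field when labels are included in the batch, and the names `silo_loaders`, `local_epochs`, and `lr` are illustrative placeholders.

```python
import copy
import torch

def federated_average_round(global_model, silo_loaders, local_epochs=1, lr=5e-5):
    """One FedAvg round: each silo trains a copy of the current global model
    on its local clinical notes, then the server averages the resulting
    parameters weighted by the number of examples each silo contributed."""
    silo_states, silo_sizes = [], []

    for loader in silo_loaders:
        local_model = copy.deepcopy(global_model)
        optimizer = torch.optim.AdamW(local_model.parameters(), lr=lr)
        local_model.train()
        n_examples = 0
        for _ in range(local_epochs):
            for batch in loader:
                optimizer.zero_grad()
                # For MLM pretraining the loss is masked-token cross-entropy;
                # for fine-tuning it is the downstream task loss. We assume a
                # model (e.g. BertForMaskedLM) whose forward returns .loss
                # when the batch dict includes labels.
                loss = local_model(**batch).loss
                loss.backward()
                optimizer.step()
                n_examples += batch["input_ids"].size(0)
        silo_states.append(local_model.state_dict())
        silo_sizes.append(n_examples)

    # Weighted average of floating-point parameters; non-float buffers are
    # kept from the first silo's copy (they are identical across silos).
    total = float(sum(silo_sizes))
    avg_state = copy.deepcopy(silo_states[0])
    for key, value in avg_state.items():
        if value.is_floating_point():
            avg_state[key] = sum(
                state[key] * (size / total)
                for state, size in zip(silo_states, silo_sizes)
            )
    global_model.load_state_dict(avg_state)
    return global_model
```

Repeating this round over many communication cycles, starting either from random initialization (federated pretraining) or from an existing BERT checkpoint (federated fine-tuning), keeps all clinical text inside its own silo; only model parameters are exchanged.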

Citations (32)


Authors (2)
