
Federated learning model for predicting major postoperative complications (2404.06641v1)

Published 9 Apr 2024 in cs.LG, cs.AI, and cs.CY

Abstract:

Background: The accurate prediction of postoperative complication risk using Electronic Health Records (EHR) and artificial intelligence shows great potential. Training a robust artificial intelligence model typically requires large-scale and diverse datasets; in practice, however, collecting medical data often runs into privacy-protection barriers.

Methods: This retrospective cohort study includes adult patients who were admitted to UFH Gainesville (GNV) (n = 79,850) and Jacksonville (JAX) (n = 28,636) for any type of inpatient surgical procedure. Using perioperative and intraoperative features, we developed federated learning models to predict nine major postoperative complications (e.g., prolonged intensive care unit stay and mechanical ventilation). We compared federated learning models with local learning models trained at a single site and central learning models trained on the pooled dataset from the two centers.

Results: Our federated learning models achieved area under the receiver operating characteristic curve (AUROC) values ranging from 0.81 for wound complications to 0.92 for prolonged ICU stay at the UFH GNV center. At the UFH JAX center, these values ranged from 0.73-0.74 for wound complications to 0.92-0.93 for hospital mortality. Federated learning models achieved AUROC performance comparable to central learning models, except for prolonged ICU stay, where federated learning performed slightly better than central learning at UFH GNV but slightly worse at UFH JAX. In addition, our federated learning model performed comparably to the best local learning model at each center, demonstrating strong generalizability.

Conclusion: Federated learning is a useful tool for training robust and generalizable models from large-scale data across multiple institutions where data-protection barriers are high.
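The abstract contrasts federated models (trained across sites without pooling patient records) with local and central models. The abstract does not name the aggregation scheme used, so as a hedged illustration only, here is a minimal sketch of the standard federated-averaging (FedAvg) step, in which each site trains on its own data and only model parameters are shared and combined, weighted by site sample counts. The function name `fedavg` and the toy parameter vectors are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def fedavg(site_weights, site_sizes):
    """Combine per-site model parameters by sample-weighted averaging (FedAvg-style).

    site_weights: list of 1-D parameter arrays, one per site (same shape).
    site_sizes:   number of training samples at each site.
    Only parameters cross site boundaries; raw patient data never does.
    """
    sizes = np.asarray(site_sizes, dtype=float)
    coeffs = sizes / sizes.sum()            # each site's share of total samples
    stacked = np.stack(site_weights)        # shape: (n_sites, n_params)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Two hypothetical sites mirroring the cohort sizes (GNV: 79,850; JAX: 28,636).
w_gnv = np.array([0.2, -0.1, 0.5])          # illustrative local parameters
w_jax = np.array([0.4, 0.0, 0.3])
global_w = fedavg([w_gnv, w_jax], [79850, 28636])
```

In a full training run this aggregation would be repeated over many communication rounds, with each site resuming local training from the updated global parameters.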

Authors (11)
  1. Yonggi Park (6 papers)
  2. Yuanfang Ren (24 papers)
  3. Benjamin Shickel (24 papers)
  4. Ziyuan Guan (20 papers)
  5. Ayush Patela (1 paper)
  6. Yingbo Ma (20 papers)
  7. Zhenhong Hu (9 papers)
  8. Tyler J. Loftus (15 papers)
  9. Parisa Rashidi (59 papers)
  10. Tezcan Ozrazgat-Baslanti (32 papers)
  11. Azra Bihorac (51 papers)
Citations (1)
