MEDFuse: Multimodal EHR Data Fusion with Masked Lab-Test Modeling and Large Language Models (2407.12309v1)
Abstract: Electronic health records (EHRs) are multimodal by nature, consisting of structured tabular features such as lab tests alongside unstructured clinical notes. In real-life clinical practice, doctors draw on these complementary EHR data sources to form a clearer picture of a patient's health and to support clinical decision-making. However, most EHR predictive models do not reflect this practice: they either focus on a single modality or overlook the interactions and redundancy between modalities. In this work, we propose MEDFuse, a Multimodal EHR Data Fusion framework that combines masked lab-test modeling with large language models (LLMs) to effectively integrate structured and unstructured medical data. MEDFuse leverages multimodal embeddings extracted from two sources: LLMs fine-tuned on free-text clinical notes, and masked tabular transformers trained on structured lab-test results. We design a disentangled transformer module, optimized by a mutual information loss, to 1) decouple modality-specific from modality-shared information and 2) distill a useful joint representation from the noise and redundancy present in clinical notes. Through comprehensive validation on the public MIMIC-III dataset and the in-house FEMH dataset, MEDFuse demonstrates strong potential for advancing clinical prediction, achieving over 90% F1 score on the 10-disease multi-label classification task.
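To make the masked lab-test modeling idea concrete, below is a minimal PyTorch sketch, not the authors' implementation. It treats each lab test as a token (a learned test-ID embedding plus a projection of its numeric value), masks a random subset of values, and trains the encoder to reconstruct them, analogous to masked language modeling. All names (`MaskedLabTransformer`, the 15% mask rate, the layer sizes) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MaskedLabTransformer(nn.Module):
    """Illustrative sketch of masked lab-test modeling (not the paper's code).

    Each lab test becomes a token: a learned test-ID embedding plus a linear
    projection of its numeric value. During pre-training, a random subset of
    values is replaced by a mask token and reconstructed from context.
    """

    def __init__(self, num_tests: int = 50, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.test_emb = nn.Embedding(num_tests, dim)      # which lab test this token is
        self.value_proj = nn.Linear(1, dim)               # embeds the numeric value
        self.mask_token = nn.Parameter(torch.zeros(dim))  # stands in for masked values
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.reconstruct = nn.Linear(dim, 1)              # predicts the masked value

    def forward(self, test_ids, values, mask):
        # test_ids: (B, T) long, values: (B, T) float, mask: (B, T) bool
        val_tok = self.value_proj(values.unsqueeze(-1))
        # Swap in the mask token wherever a value is hidden.
        val_tok = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(val_tok), val_tok)
        hidden = self.encoder(self.test_emb(test_ids) + val_tok)
        return self.reconstruct(hidden).squeeze(-1), hidden

# Pre-training step: reconstruct only the masked lab values.
model = MaskedLabTransformer()
test_ids = torch.arange(50).expand(8, 50)    # dummy batch of 8 patients, 50 tests
values = torch.randn(8, 50)                  # dummy (normalized) lab values
mask = torch.rand(8, 50) < 0.15              # mask ~15% of values
pred, hidden = model(test_ids, values, mask)
loss = nn.functional.mse_loss(pred[mask], values[mask])
loss.backward()
# After pre-training, pooled `hidden` states serve as the structured-data embedding.
```

A disentangled fusion module in the spirit the abstract describes could then combine this lab embedding with a text embedding from a fine-tuned LLM. The sketch below is again a hypothetical reading: separate transformer encoders produce modality-shared and modality-specific views, and a simple similarity-based penalty stands in for the paper's mutual information loss (the exact MI estimator is not specified in the abstract). `DisentangledFusion` and `disentangle_loss` are invented names.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledFusion(nn.Module):
    """Illustrative sketch of MEDFuse-style disentangled fusion (assumed design)."""

    def __init__(self, dim: int = 256, num_heads: int = 4, num_labels: int = 10):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        # nn.TransformerEncoder deep-copies the layer, so the two encoders are independent.
        self.shared_enc = nn.TransformerEncoder(layer, num_layers=1)
        self.specific_enc = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Linear(4 * dim, num_labels)  # multi-label disease head

    def forward(self, text_emb, lab_emb):
        # One "token" per modality: (B, 2, dim).
        tokens = torch.stack([text_emb, lab_emb], dim=1)
        shared = self.shared_enc(tokens)      # modality-shared views
        specific = self.specific_enc(tokens)  # modality-specific views
        fused = torch.cat(
            [shared[:, 0], shared[:, 1], specific[:, 0], specific[:, 1]], dim=-1
        )
        return self.head(fused), shared, specific

def disentangle_loss(shared, specific):
    # Crude MI proxy: align the two shared views across modalities,
    # and penalize similarity between shared and specific components.
    align = 1 - F.cosine_similarity(shared[:, 0], shared[:, 1], dim=-1).mean()
    leak = F.cosine_similarity(shared, specific, dim=-1).abs().mean()
    return align + leak

# Usage with dummy pooled embeddings from the two encoders.
model = DisentangledFusion()
text_emb = torch.randn(8, 256)  # e.g., pooled LLM embedding of a clinical note
lab_emb = torch.randn(8, 256)   # e.g., pooled masked-lab-transformer embedding
logits, shared, specific = model(text_emb, lab_emb)
labels = torch.randint(0, 2, (8, 10)).float()  # 10-disease multi-label target
loss = F.binary_cross_entropy_with_logits(logits, labels) + 0.1 * disentangle_loss(shared, specific)
loss.backward()
```

The binary-cross-entropy head matches the abstract's 10-disease multi-label setup; the 0.1 weight on the disentanglement term is an arbitrary placeholder.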
- Thao Minh Nguyen Phan
- Cong-Tinh Dao
- Chenwei Wu
- Jian-Zhe Wang
- Shun Liu
- Jun-En Ding
- David Restrepo
- Feng Liu
- Fang-Ming Hung
- Wen-Chih Peng