Contrastive Learning on Multimodal Analysis of Electronic Health Records (2403.14926v1)

Published 22 Mar 2024 in stat.ML and cs.LG

Abstract: Electronic health record (EHR) systems contain a wealth of multimodal clinical data, including structured data such as clinical codes and unstructured data such as clinical notes. However, many existing EHR studies have either concentrated on a single modality or merged modalities in a rudimentary fashion, treating structured and unstructured data as separate entities and neglecting the inherent synergy between them. In fact, the two modalities contain clinically relevant, inextricably linked, and complementary health information, and joint analysis of both captures a more complete picture of a patient's medical history. Despite the great success of multimodal contrastive learning in vision-language settings, its potential remains under-explored for multimodal EHR, particularly in terms of its theoretical understanding. To accommodate the statistical analysis of multimodal EHR data, we propose a novel multimodal feature embedding generative model and design a multimodal contrastive loss to obtain multimodal EHR feature representations. Our theoretical analysis demonstrates the effectiveness of multimodal learning over single-modality learning and connects the solution of the loss function to the singular value decomposition of a pointwise mutual information matrix. This connection paves the way for a privacy-preserving algorithm tailored to multimodal EHR feature representation learning. Simulation studies show that the proposed algorithm performs well under a variety of configurations, and we further validate its clinical utility on real-world EHR data.
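The abstract's key theoretical result is that the minimizer of the multimodal contrastive loss relates to the singular value decomposition of a pointwise mutual information (PMI) matrix between the two modalities. The sketch below is a minimal illustration of that general PMI-plus-SVD idea, not the authors' actual algorithm: it builds an empirical PMI matrix from co-occurrence counts (e.g. clinical codes by note concepts) and extracts low-dimensional embeddings for each modality via truncated SVD. The function name, the smoothing constant `eps`, and the toy count matrix are all assumptions for illustration.

```python
import numpy as np

def pmi_svd_embeddings(cooccur, dim=2, eps=1e-8):
    """Illustrative sketch: embed two modalities via truncated SVD of a
    PMI matrix built from their co-occurrence counts.

    cooccur : (n, m) array of nonnegative co-occurrence counts, rows
        indexing one modality (e.g. clinical codes) and columns the
        other (e.g. note concepts).
    dim : target embedding dimension.
    eps : small constant to avoid log(0) on unseen pairs (assumption,
        not from the paper).
    """
    total = cooccur.sum()
    p_xy = cooccur / total                       # joint probabilities
    p_x = p_xy.sum(axis=1, keepdims=True)        # row marginals
    p_y = p_xy.sum(axis=0, keepdims=True)        # column marginals
    # PMI(x, y) = log p(x, y) / (p(x) p(y)), smoothed with eps
    pmi = np.log((p_xy + eps) / (p_x @ p_y + eps))
    # Truncated SVD; scale singular vectors by sqrt of singular values
    # so that row_emb @ col_emb.T approximates the PMI matrix.
    U, s, Vt = np.linalg.svd(pmi, full_matrices=False)
    row_emb = U[:, :dim] * np.sqrt(s[:dim])
    col_emb = Vt[:dim, :].T * np.sqrt(s[:dim])
    return row_emb, col_emb

# Toy co-occurrence counts between 3 codes and 3 note concepts
counts = np.array([[10.0, 2.0, 1.0],
                   [1.0, 8.0, 3.0],
                   [0.0, 1.0, 9.0]])
code_emb, concept_emb = pmi_svd_embeddings(counts, dim=2)
```

Because only an aggregate co-occurrence (or PMI) matrix is needed rather than patient-level records, a factorization of this kind is the sort of summary-statistic computation that lends itself to the privacy-preserving setting the abstract mentions.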

References (45)
  1. GPT-4 technical report. arXiv preprint arXiv:2303.08774.
  2. Publicly available clinical bert embeddings. arXiv preprint arXiv:1904.03323.
  3. A latent variable model approach to pmi-based word embeddings. Transactions of the Association for Computational Linguistics, 4:385–399.
  4. Linear algebraic structure of word senses, with applications to polysemy. Transactions of the Association for Computational Linguistics, 6:483–495.
  5. Improving clinical outcome predictions using convolution over medical entities with multimodal learning. Artificial Intelligence in Medicine, 117:102112.
  6. Clinical concept embeddings learned from massive sources of multimodal medical data. In PACIFIC SYMPOSIUM ON BIOCOMPUTING 2020, pages 295–306. World Scientific.
  7. Exact matrix completion via convex optimization. Foundations of Computational mathematics, 9(6):717–772.
  8. Multi-layer representation learning for medical concepts. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1495–1504.
  9. Using recurrent neural network models for early detection of heart failure onset. Journal of the American Medical Informatics Association, 24(2):361–370.
  10. Learning low-dimensional representations of medical concepts. AMIA Summits on Translational Science Proceedings, 2016:41–50.
  11. Medical semantic similarity with a neural language model. In Proceedings of the 23rd ACM International Conference on Information and Knowledge Management, pages 1819–1822.
  12. Implicit chain of thought reasoning via knowledge distillation. arXiv preprint arXiv:2311.01460.
  13. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, pages 4171–4186.
  14. Arch: Large-scale knowledge graph via aggregated narrative codified health records analysis. medRxiv.
  15. Electronic medical record phenotyping using the anchor and learn framework. Journal of the American Medical Informatics Association, 23(4):731–740.
  16. Clinical knowledge extraction via sparse embedding regression (KESER) with multi-center large scale electronic health record data. NPJ digital medicine, 4(1):1–11.
  17. Clinical XLNet: Modeling sequential clinical notes and predicting prolonged mechanical ventilation. In Proceedings of the 3rd Clinical Natural Language Processing Workshop, pages 94–100, Online. Association for Computational Linguistics.
  18. What makes multi-modal learning better than single (provably). Advances in Neural Information Processing Systems, 34:10944–10956.
  19. The power of contrast for feature learning: A theoretical analysis. arXiv preprint arXiv:2110.02473.
  20. MIMIC-III, a freely accessible critical care database. Scientific Data, 3(1):1–9.
  21. Code2vec: Embedding and clustering medical diagnosis data. In 2017 IEEE International Conference on Healthcare Informatics (ICHI), pages 386–390.
  22. Using clinical notes with time series data for ICU management. In Inui, K., Jiang, J., Ng, V., and Wan, X., editors, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6432–6437, Hong Kong, China. Association for Computational Linguistics.
  23. Clinical-t5: Large language models built using mimic clinical text. PhysioNet.
  24. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems, volume 27.
  25. Multi-modal contrastive learning for healthcare data analytics. In 2022 IEEE 10th International Conference on Healthcare Informatics (ICHI), pages 120–127. IEEE.
  26. RxNorm: prescription for electronic drug information exchange. IT professional, 7(5):17–23.
  27. Multimodal data matters: language model pre-training over structured and unstructured electronic health records. IEEE Journal of Biomedical and Health Informatics, 27(1):504–514.
  28. Knowledge graph embedding with electronic health records data via latent graphical block model. arXiv preprint arXiv:2305.19997.
  29. Using UMLS Concept Unique Identifiers (CUIs) for word sense disambiguation in the biomedical domain. In AMIA Annual Symposium Proceedings, volume 2007, pages 533–537. American Medical Informatics Association.
  30. Efficient estimation of word representations in vector space. Proceedings of Workshop at ICLR, 2013.
  31. Understanding multimodal contrastive learning and incorporating unpaired data. In International Conference on Artificial Intelligence and Statistics, pages 4348–4380. PMLR.
  32. OpenAI (2023). ChatGPT: Optimizing language models for dialogue. URL: https://openai. com/blog/chatgpt.
  33. Mnn: multimodal attentional neural networks for diagnosis prediction. Extraction, 1(2019):A1.
  34. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR.
  35. Data integration of structured and unstructured sources for assigning clinical codes to patient stays. Journal of the American Medical Informatics Association, 23(e1):e11–e19.
  36. Natural language processing of clinical notes on chronic diseases: systematic review. JMIR medical informatics, 7(2):e12239.
  37. Advancing the science for active surveillance: rationale and design for the observational medical outcomes partnership. Annals of Internal Medicine, 153(9):600–606.
  38. Systemic inflammatory reaction after pneumococcal vaccine: a case series. Human Vaccines & Immunotherapeutics, 10(6):1767–1770.
  39. Hierarchical pretraining on multimodal electronic health records. arXiv preprint arXiv:2310.07871.
  40. Codes clinical correlation test with inference on pmi matrix. Preprint.
  41. A decision support system in precision medicine: contrastive multimodal learning for patient stratification. Annals of Operations Research, pages 1–29.
  42. Heteroskedastic pca: Algorithm, optimality, and applications. The Annals of Statistics, 50(1):53–80.
  43. BERT-XML: Large scale automated ICD coding using BERT pretraining. In Rumshisky, A., Roberts, K., Bethard, S., and Naumann, T., editors, Proceedings of the 3rd Clinical Natural Language Processing Workshop, pages 24–34, Online. Association for Computational Linguistics.
  44. Multi-source learning via completion of block-wise overlapping noisy matrices. arXiv preprint arXiv:2105.10360.
  45. Multiview incomplete knowledge graph integration with application to cross-institutional ehr data harmonization. Journal of Biomedical Informatics, 133:104147.
Authors (5)
  1. Tianxi Cai
  2. Feiqing Huang
  3. Ryumei Nakada
  4. Linjun Zhang
  5. Doudou Zhou