Enhancing Clinical Efficiency through LLM: Discharge Note Generation for Cardiac Patients (2404.05144v1)
Abstract: Medical documentation, including discharge notes, is crucial for ensuring quality of patient care, continuity, and effective medical communication. However, creating these documents manually is not only time-consuming but also prone to inconsistency and error. Automating this documentation process with AI is a promising area of innovation in healthcare. This study addresses the inefficiencies and inaccuracies of manually created discharge notes, particularly for cardiac patients, by employing AI techniques, specifically large language models (LLMs). Using a substantial dataset from a cardiology center, encompassing wide-ranging medical records and physician assessments, our research evaluates the capability of LLMs to enhance the documentation process. Among the models assessed, Mistral-7B distinguished itself by accurately generating discharge notes that significantly improve both documentation efficiency and continuity of care. These notes underwent rigorous qualitative evaluation by medical experts, receiving high marks for clinical relevance, completeness, readability, and contribution to informed decision-making and care planning. Coupled with quantitative analyses, these results confirm Mistral-7B's efficacy in distilling complex medical information into concise, coherent summaries. Overall, our findings highlight the considerable promise of specialized LLMs such as Mistral-7B in refining healthcare documentation workflows and advancing patient care. This study lays the groundwork for further integration of advanced AI technologies in healthcare, demonstrating their potential to transform patient documentation and support better care outcomes.
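To make the described pipeline concrete, below is a minimal sketch, not the authors' released code, of how discharge-note generation with Mistral-7B and a quantitative check might look using Hugging Face `transformers` and `evaluate`. The model checkpoint name, prompt format, decoding settings, and the example record are illustrative assumptions; the paper does not specify them.

```python
# Hedged sketch: discharge-note generation with Mistral-7B plus a
# ROUGE-based quantitative comparison against a physician-written note.
# Checkpoint name, prompt, and example data are assumptions, not the
# authors' actual setup.
import evaluate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.1"  # assumed base model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    torch_dtype=torch.float16,  # half precision to fit a single GPU
    device_map="auto",
)

def generate_discharge_note(record: str, max_new_tokens: int = 512) -> str:
    """Summarize a de-identified cardiac patient record into a discharge note."""
    prompt = (
        "Summarize the following cardiac patient record into a concise "
        f"discharge note:\n\n{record}\n\nDischarge note:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,  # deterministic decoding for documentation use
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

# Quantitative check against a physician-written reference note.
rouge = evaluate.load("rouge")
generated = generate_discharge_note("68-year-old male admitted with NSTEMI ...")
reference = "Patient admitted with NSTEMI, treated with PCI ..."
print(rouge.compute(predictions=[generated], references=[reference]))
```

In practice the referenced QLoRA/PEFT tooling would be used to fine-tune such a base model on paired records and physician-written notes before generation, with ROUGE-style scores complementing the expert qualitative review.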
Authors: Yunha Kim, Heejung Choi, Hyeram Seo, Minkyoung Kim, JiYe Han, Gaeun Kee, Seohyun Park, Soyoung Ko, Byeolhee Kim, Suyeon Kim, Tae Joon Jun, Young-Hak Kim, Hyoje Jung