Enhancing Clinical Efficiency through LLM: Discharge Note Generation for Cardiac Patients (2404.05144v1)

Published 8 Apr 2024 in cs.CL, cs.CV, and cs.LG

Abstract: Medical documentation, including discharge notes, is crucial for ensuring patient care quality, continuity, and effective medical communication. However, the manual creation of these documents is not only time-consuming but also prone to inconsistencies and potential errors. The automation of this documentation process using AI represents a promising area of innovation in healthcare. This study directly addresses the inefficiencies and inaccuracies in creating discharge notes manually, particularly for cardiac patients, by employing AI techniques, specifically LLMs. Utilizing a substantial dataset from a cardiology center, encompassing wide-ranging medical records and physician assessments, our research evaluates the capability of LLMs to enhance the documentation process. Among the various models assessed, Mistral-7B distinguished itself by accurately generating discharge notes that significantly improve both documentation efficiency and the continuity of care for patients. These notes underwent rigorous qualitative evaluation by medical experts, receiving high marks for their clinical relevance, completeness, readability, and contribution to informed decision-making and care planning. Coupled with quantitative analyses, these results confirm Mistral-7B's efficacy in distilling complex medical information into concise, coherent summaries. Overall, our findings illuminate the considerable promise of specialized LLMs, such as Mistral-7B, in refining healthcare documentation workflows and advancing patient care. This study lays the groundwork for further integrating advanced AI technologies in healthcare, demonstrating their potential to revolutionize patient documentation and support better care outcomes.
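The paper does not publish its implementation, but the pipeline the abstract describes — assembling a cardiac patient's records and physician assessments into a prompt for a fine-tuned LLM such as Mistral-7B — can be sketched as follows. Every field name and the prompt template here are illustrative assumptions, not the authors' actual data schema or format.

```python
# Illustrative sketch of a discharge-note generation pipeline.
# The record schema and prompt wording are assumptions; the paper does
# not disclose its actual input format.

def build_discharge_prompt(record: dict) -> str:
    """Flatten a patient's records into a single instruction prompt."""
    sections = [
        ("Admission diagnosis", record.get("diagnosis", "N/A")),
        ("Hospital course", record.get("course", "N/A")),
        ("Physician assessment", record.get("assessment", "N/A")),
        ("Medications at discharge",
         ", ".join(record.get("medications", [])) or "N/A"),
    ]
    body = "\n".join(f"{title}: {text}" for title, text in sections)
    return (
        "Summarize the following cardiac inpatient record into a concise "
        "discharge note covering diagnosis, hospital course, and "
        "follow-up plan.\n\n" + body
    )

# A fine-tuned model (e.g. Mistral-7B loaded via Hugging Face
# transformers) would then consume the prompt; left as a comment to keep
# this sketch self-contained:
#   inputs = tokenizer(prompt, return_tensors="pt")
#   out = model.generate(**inputs, max_new_tokens=512)
```

The actual study fine-tunes the model on paired records and physician-written notes (the references list QLoRA and PEFT, suggesting parameter-efficient fine-tuning), so generation quality depends on that training step, not on the prompt alone.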

Authors (13)
  1. Yunha Kim
  2. Heejung Choi
  3. Hyeram Seo
  4. Minkyoung Kim
  5. JiYe Han
  6. Gaeun Kee
  7. Seohyun Park
  8. Soyoung Ko
  9. Byeolhee Kim
  10. Suyeon Kim
  11. Tae Joon Jun
  12. Young-Hak Kim
  13. Hyoje Jung
Citations (9)