
Generalizable and Scalable Multistage Biomedical Concept Normalization Leveraging Large Language Models (2405.15122v1)

Published 24 May 2024 in cs.CL

Abstract: Background: Biomedical entity normalization is critical to biomedical research because the richness of free-text clinical data, such as progress notes, can often be fully leveraged only after translating words and phrases into structured and coded representations suitable for analysis. LLMs, in turn, have shown great potential and high performance in a variety of NLP tasks, but their application to normalization remains understudied. Methods: We applied both proprietary and open-source LLMs in combination with several rule-based normalization systems commonly used in biomedical research. We used a two-step LLM integration approach: (1) using an LLM to generate alternative phrasings of a source utterance, and (2) using an LLM to prune candidate UMLS concepts, employing a variety of prompting methods. We measured results by $F_{\beta}$, which favors recall over precision, and by F1. Results: We evaluated a total of 5,523 concept terms and text contexts from a publicly available dataset of human-annotated biomedical abstracts. Incorporating GPT-3.5-turbo increased overall $F_{\beta}$ and F1 in normalization systems by +9.5 and +7.3 (MetaMapLite), +13.9 and +10.9 (QuickUMLS), and +10.5 and +10.3 (BM25), while the open-source Vicuna model achieved +10.8 and +12.2 (MetaMapLite), +14.7 and +15.0 (QuickUMLS), and +15.6 and +18.7 (BM25). Conclusions: Existing general-purpose LLMs, both proprietary and open-source, can be leveraged at scale to greatly improve normalization performance using existing tools, with no fine-tuning.
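The two-step integration described in the abstract (LLM-generated paraphrases of the source utterance, followed by LLM-based pruning of candidate UMLS concepts produced by a rule-based matcher such as MetaMapLite, QuickUMLS, or BM25) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the `llm_paraphrase` and `llm_prune` stubs, the toy UMLS sample, and the choice of beta = 2 for $F_{\beta}$ are all hypothetical, and BM25 (via the rank_bm25 package) stands in for the candidate generator.

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

# Toy UMLS sample (CUI -> preferred term); purely illustrative, not from the paper.
UMLS_SAMPLE = {
    "C0020538": "hypertensive disease",
    "C0011849": "diabetes mellitus",
    "C0027051": "myocardial infarction",
}
TERMS = list(UMLS_SAMPLE.values())
bm25 = BM25Okapi([t.split() for t in TERMS])

def llm_paraphrase(utterance: str) -> list[str]:
    # Step 1 (assumed interface): prompt an LLM (e.g. GPT-3.5-turbo or Vicuna)
    # for alternative phrasings of the source utterance. Stubbed so the sketch runs.
    return []

def llm_prune(utterance: str, candidates: list[str]) -> list[str]:
    # Step 2 (assumed interface): prompt an LLM to keep only candidate concepts
    # that actually match the utterance. Stubbed here to keep everything.
    return candidates

def normalize(utterance: str, top_k: int = 5) -> list[str]:
    # Expand the query with LLM paraphrases, retrieve candidates lexically with
    # BM25 (one of the evaluated candidate generators), then prune with the LLM.
    queries = [utterance] + llm_paraphrase(utterance)
    best: dict[str, float] = {}
    for q in queries:
        for term, score in zip(TERMS, bm25.get_scores(q.split())):
            best[term] = max(best.get(term, 0.0), float(score))
    ranked = sorted(best, key=best.get, reverse=True)[:top_k]
    candidates = [t for t in ranked if best[t] > 0.0]
    kept = set(llm_prune(utterance, candidates))
    return [cui for cui, term in UMLS_SAMPLE.items() if term in kept]

def f_beta(precision: float, recall: float, beta: float = 2.0) -> float:
    # F_beta with beta > 1 weighting recall over precision; beta = 2 is an
    # assumed value, since the abstract does not state the exact beta used.
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

if __name__ == "__main__":
    print(normalize("diabetes mellitus type 2"))    # -> ['C0011849'] with the toy sample
    print(round(f_beta(precision=0.6, recall=0.9), 3))
```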

Authors (1)
  1. Nicholas J Dobbins (8 papers)
Citations (2)

