
Distilling Large Language Models for Biomedical Knowledge Extraction: A Case Study on Adverse Drug Events (2307.06439v1)

Published 12 Jul 2023 in cs.CL and cs.AI

Abstract: LLMs, such as GPT-4, have demonstrated remarkable capabilities across a wide range of tasks, including health applications. In this paper, we study how LLMs can be used to scale biomedical knowledge curation. We find that, while LLMs already possess decent competency in structuring biomedical text, distillation into a task-specific student model through self-supervised learning yields substantial gains over out-of-the-box LLMs, with additional advantages such as cost, efficiency, and white-box model access. We conduct a case study on adverse drug event (ADE) extraction, which is an important area for improving care. On standard ADE extraction evaluation, a GPT-3.5 distilled PubMedBERT model attained accuracy comparable to supervised state-of-the-art models without using any labeled data. Despite being over 1,000 times smaller, the distilled model outperformed its teacher GPT-3.5 by over 6 absolute points in F1 and GPT-4 by over 5 absolute points. Ablation studies on distillation model choice (e.g., PubMedBERT vs BioGPT) and ADE extraction architecture shed light on best practices for biomedical knowledge extraction. Similar gains were attained by distillation for other standard biomedical knowledge extraction tasks such as gene-disease associations and protected health information, further illustrating the promise of this approach.
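The abstract describes a self-supervised distillation pipeline: a teacher LLM (GPT-3.5) structures unlabeled biomedical text, and a compact student model (PubMedBERT) is fine-tuned on the resulting silver labels. Below is a minimal sketch of that pattern using Hugging Face Transformers; the BIO tag set, the `annotate_with_teacher` stub, and all hyperparameters are illustrative assumptions, not the paper's actual prompts or configuration.

```python
# Sketch: LLM-to-PubMedBERT distillation for ADE extraction (token classification).
# Assumptions (not from the paper): annotate_with_teacher() stands in for a
# GPT-3.5 structuring prompt; the label set and training settings are illustrative.
import torch
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["O", "B-DRUG", "I-DRUG", "B-ADE", "I-ADE"]  # illustrative tag set
label2id = {label: i for i, label in enumerate(LABELS)}

def annotate_with_teacher(words):
    """Placeholder for the teacher phase: prompt an LLM (e.g. GPT-3.5) to tag
    each word, then parse its output into one BIO tag per word. Returns all 'O'
    here so the sketch runs without API access."""
    return ["O"] * len(words)

# 1) Teacher phase: produce silver labels for unlabeled biomedical sentences.
unlabeled = [
    "The patient developed severe nausea after starting metformin .".split(),
]
silver = [{"words": w, "tags": [label2id[t] for t in annotate_with_teacher(w)]}
          for w in unlabeled]

# 2) Student phase: fine-tune PubMedBERT on the silver labels.
model_name = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=len(LABELS))

class SilverDataset(torch.utils.data.Dataset):
    def __init__(self, examples):
        self.examples = examples
    def __len__(self):
        return len(self.examples)
    def __getitem__(self, idx):
        ex = self.examples[idx]
        enc = tokenizer(ex["words"], is_split_into_words=True,
                        truncation=True, padding="max_length", max_length=128)
        # Align word-level tags to subword tokens; special/padding tokens get -100.
        enc["labels"] = [-100 if wid is None else ex["tags"][wid]
                         for wid in enc.word_ids()]
        return {k: torch.tensor(v) for k, v in enc.items()}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ade-distilled", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=SilverDataset(silver),
)
trainer.train()
```

In this setup the student never sees gold annotations: its only supervision is the teacher's output over a large unlabeled corpus, which is what allows the distilled model to be far smaller while still matching or surpassing the teacher on the extraction task.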

Authors (11)
  1. Yu Gu (218 papers)
  2. Sheng Zhang (212 papers)
  3. Naoto Usuyama (22 papers)
  4. Yonas Woldesenbet (1 paper)
  5. Cliff Wong (14 papers)
  6. Praneeth Sanapathi (1 paper)
  7. Mu Wei (11 papers)
  8. Naveen Valluri (3 papers)
  9. Erika Strandberg (1 paper)
  10. Tristan Naumann (41 papers)
  11. Hoifung Poon (61 papers)
Citations (14)