From Beginner to Expert: Modeling Medical Knowledge into General LLMs (2312.01040v3)

Published 2 Dec 2023 in cs.CL

Abstract: Recently, LLM-based AI systems have demonstrated remarkable capabilities in natural language understanding and generation. However, these models face a significant challenge in sensitive applications, such as reasoning over medical knowledge and answering medical questions in a physician-like manner. Prior studies attempted to overcome this challenge by increasing the model size (>100B) to learn more general medical knowledge, while there is still room for improvement in LLMs with smaller model sizes (<100B). In this work, we start from a pre-trained general LLM (AntGLM-10B) and fine-tune it from a medical beginner into a medical expert (called AntGLM-Med-10B), leveraging a 3-stage optimization procedure: general medical knowledge injection, medical domain instruction tuning, and specific medical task adaptation. Our contributions are threefold: (1) We specifically investigate how to adapt a pre-trained general LLM to the medical domain, especially for a specific medical task. (2) We collect and construct large-scale medical datasets for each stage of the optimization process. These datasets encompass various data types and tasks, such as question answering, medical reasoning, multiple-choice questions, and medical conversations. (3) Specifically for multiple-choice questions in the medical domain, we propose a novel Verification-of-Choice approach to prompt engineering, which significantly enhances the reasoning ability of LLMs. Remarkably, by combining the above approaches, our AntGLM-Med-10B model outperforms most LLMs on PubMedQA, including both general and medical LLMs, even when those LLMs have larger model sizes.
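
The abstract names the Verification-of-Choice prompting approach for multiple-choice questions but does not describe its mechanics. Below is a minimal sketch of what a verification-style prompt pipeline for multiple-choice medical QA could look like: the model first gathers evidence for each candidate answer separately, then decides among the candidates given all the evidence. The prompt wording, the option labels, and the `query_llm` helper are all assumptions for illustration, not the paper's actual implementation.

```python
def query_llm(prompt: str) -> str:
    """Placeholder for a call to the underlying LLM (e.g., AntGLM-Med-10B).

    Wire up your own model client here; this stub is an assumption,
    not part of the paper.
    """
    raise NotImplementedError


def verification_of_choice(question: str, choices: list[str]) -> str:
    """Answer a multiple-choice question via per-choice verification.

    Step 1 asks the model to verify each candidate answer in isolation;
    step 2 asks it to weigh all verifications and commit to one option.
    """
    # Step 1: elicit supporting/refuting evidence for every candidate.
    verifications = []
    for label, choice in zip("ABCDE", choices):
        prompt = (
            f"Question: {question}\n"
            f"Candidate answer ({label}): {choice}\n"
            "Provide medical evidence that supports or refutes this candidate."
        )
        verifications.append(f"({label}) {choice}\n{query_llm(prompt)}")

    # Step 2: present all per-choice verifications and ask for a final decision.
    final_prompt = (
        f"Question: {question}\n\n"
        "Evidence gathered for each candidate answer:\n\n"
        + "\n\n".join(verifications)
        + "\n\nWeigh the evidence above and reply with the single best option letter."
    )
    return query_llm(final_prompt)
```

Compared with asking for an answer in one shot, this structure forces the model to reason about every option before choosing, which is the plausible source of the reasoning gains the abstract reports.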

Authors (17)
  1. Qiang Li (449 papers)
  2. Xiaoyan Yang (50 papers)
  3. Haowen Wang (25 papers)
  4. Qin Wang (142 papers)
  5. Lei Liu (332 papers)
  6. Junjie Wang (164 papers)
  7. Yang Zhang (1129 papers)
  8. Mingyuan Chu (2 papers)
  9. Sen Hu (32 papers)
  10. Yicheng Chen (24 papers)
  11. Yue Shen (243 papers)
  12. Cong Fan (6 papers)
  13. Wangshu Zhang (3 papers)
  14. Teng Xu (21 papers)
  15. Jinjie Gu (50 papers)
  16. Jing Zheng (12 papers)
  17. Guannan Zhang (Ant Group) (1 paper)
Citations (9)