
Fine-Tuning Medical Language Models for Enhanced Long-Contextual Understanding and Domain Expertise (2407.11536v1)

Published 16 Jul 2024 in cs.CL and cs.AI

Abstract: LLMs have been widely applied in various professional fields. Fine-tuning these models on domain-specific question-and-answer datasets significantly improves their professional domain knowledge and Q&A abilities; for example, medical LLMs fine-tuned on doctor-patient Q&A data exhibit extraordinary disease-diagnostic abilities. However, we observed that despite these gains in domain-specific knowledge, the long-context understanding of medical LLMs declines significantly, especially compared to general LLMs with similar parameter counts. The purpose of this study is to investigate this reduced long-context performance in medical LLMs. We designed a series of experiments that administer open-book professional knowledge exams to all models to evaluate their ability to read long contexts. By adjusting the proportion and quantity of general and medical data during fine-tuning, we determine the data composition that best optimizes the professional model and balances long-context performance against domain-specific knowledge.
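
The paper's pipeline is not reproduced on this page; the sketch below only illustrates the data-mixing step the abstract describes, i.e. varying the proportion and quantity of medical versus general fine-tuning data. The `build_mixture` helper, the ratio sweep, and the placeholder Q&A pairs are illustrative assumptions, not the authors' actual code or corpora.

```python
import random

def build_mixture(medical, general, medical_ratio, total_size, seed=0):
    """Sample a fine-tuning set with the given fraction of medical Q&A pairs.

    `medical` and `general` are lists of (question, answer) pairs; per the
    abstract, both the ratio and the absolute quantity of each source are
    knobs to tune.
    """
    rng = random.Random(seed)
    n_med = round(total_size * medical_ratio)
    n_gen = total_size - n_med
    mix = rng.sample(medical, n_med) + rng.sample(general, n_gen)
    rng.shuffle(mix)  # interleave sources so batches are not single-domain
    return mix

if __name__ == "__main__":
    # Hypothetical placeholder corpora; in the study these would be
    # doctor-patient Q&A data and general instruction data.
    medical = [(f"med-q{i}", f"med-a{i}") for i in range(10_000)]
    general = [(f"gen-q{i}", f"gen-a{i}") for i in range(10_000)]

    # Sweep mixture ratios; each mixture would be used to fine-tune a model,
    # which is then scored on closed-book medical Q&A (domain knowledge) and
    # an open-book exam (long-context reading).
    for ratio in (1.0, 0.75, 0.5, 0.25):
        subset = build_mixture(medical, general, ratio, total_size=8_000)
        print(f"medical_ratio={ratio}: {len(subset)} examples, first={subset[0]}")
```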

Authors (5)
  1. Qimin Yang (1 paper)
  2. Rongsheng Wang (16 papers)
  3. Jiexin Chen (1 paper)
  4. Runqi Su (1 paper)
  5. Tao Tan (54 papers)
Citations (1)