
HuatuoGPT-II, One-stage Training for Medical Adaption of LLMs (2311.09774v2)

Published 16 Nov 2023 in cs.CL, cs.AI, and cs.LG

Abstract: Adapting an LLM to a specific domain, a.k.a. 'domain adaptation', is a common practice when specialized knowledge, e.g. medicine, is not encapsulated in a general LLM like Llama2. The challenge lies in the heterogeneity of data across the two training stages, as it varies in languages, genres, or formats. To tackle this and simplify the learning protocol, we propose to transform heterogeneous data, from both the pre-training and supervised stages, into a unified, simple input-output pair format. We validate the new protocol in domains where proprietary LLMs like ChatGPT perform relatively poorly, such as Traditional Chinese Medicine. The developed model, HuatuoGPT-II, shows state-of-the-art performance in the Chinese medicine domain on a number of benchmarks, e.g. medical licensing exams. It even outperforms proprietary models like ChatGPT and GPT-4 in some aspects, especially in Traditional Chinese Medicine. Expert manual evaluations further validate HuatuoGPT-II's advantages over existing LLMs. Notably, HuatuoGPT-II was benchmarked on a fresh Chinese National Medical Licensing Examination, where it achieved the best performance, showcasing not only its effectiveness but also its generalization capabilities.

An Overview of "HuatuoGPT-II, One-stage Training for Medical Adaption of LLMs"

The paper presents HuatuoGPT-II, a novel approach for adapting LLMs to the medical domain, with a focus on Chinese medicine and Traditional Chinese Medicine (TCM). The conventional two-stage adaptation process of continued pre-training followed by supervised fine-tuning is replaced with a unified one-stage protocol. This simplifies domain adaptation by transforming heterogeneous pre-training data into a uniform input-output format compatible with the fine-tuning data.
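To make the unified format concrete, the minimal Python sketch below shows how a raw pre-training passage and an existing supervised example might both be expressed as (instruction, output) pairs. The field names, example texts, and the template-based rewriting function are illustrative assumptions; in the paper the rewriting is performed by an LLM, not a fixed template.

```python
# Minimal sketch of the unified (instruction, output) format.
# Field names and example texts are hypothetical, not the paper's exact schema.

# A raw pre-training passage, as it might appear in a medical textbook corpus.
raw_passage = (
    "Ephedra (ma huang) is a warm, acrid herb traditionally used to induce "
    "sweating and relieve wheezing."
)

# A supervised fine-tuning example that is already in instruction form.
sft_example = {
    "instruction": "What is ephedra used for in Traditional Chinese Medicine?",
    "output": "It is traditionally used to induce sweating and relieve wheezing.",
}

def unify_passage(passage: str) -> dict:
    """Rewrite a raw passage into an (instruction, output) pair.

    In the paper this rewriting is done by an LLM; a trivial template stands
    in for that step here.
    """
    return {
        "instruction": "Summarize the following medical text: " + passage,
        "output": passage,
    }

# After unification, both data sources share one format and one training recipe.
unified_corpus = [unify_passage(raw_passage), sft_example]
for pair in unified_corpus:
    print(pair["instruction"][:60], "->", pair["output"][:40])
```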

Methodology

HuatuoGPT-II is trained using a comprehensive methodology to enhance its performance in the medical domain:

  1. Data Collection: The authors collect a substantial corpus of both Chinese and English medical texts from diverse sources, including textbooks, online encyclopedias, and academic papers. This is filtered and processed to ensure quality.
  2. Data Unification: A key innovation is converting the varied pre-training data into a uniform format of (instruction, output) pairs. An LLM rewrites the pre-training passages into this format, aligning them with the fine-tuning data, improving consistency, and mitigating ethical concerns.
  3. One-stage Training: The unified data are merged into a single dataset and trained in one phase. A priority sampling strategy focuses on domain-knowledge pairs early in training and gradually shifts toward fine-tuning tasks (see the sketch after this list).
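The priority sampling step can be pictured as a schedule governing which data source each training batch draws from. The sketch below assumes a simple linear decay of the domain-knowledge sampling probability; the paper's actual sampling function and hyperparameters may differ.

```python
import random

# Hypothetical priority-sampling schedule: early steps draw mostly from unified
# domain-knowledge pairs, later steps mostly from fine-tuning (instruction) pairs.
# The linear decay below is an assumption, not the paper's exact formulation.

def domain_probability(step: int, total_steps: int) -> float:
    """Probability of drawing a domain-knowledge example at a given step."""
    return max(0.0, 1.0 - step / total_steps)

def sample_batch(domain_data, sft_data, step, total_steps, batch_size=4):
    """Draw a batch, mixing the two sources according to the schedule."""
    p = domain_probability(step, total_steps)
    return [
        random.choice(domain_data) if random.random() < p else random.choice(sft_data)
        for _ in range(batch_size)
    ]

domain_data = ["domain pair 1", "domain pair 2"]
sft_data = ["sft pair 1", "sft pair 2"]

# Early batches are dominated by domain knowledge; late batches by fine-tuning data.
for step in (0, 500, 999):
    print(step, sample_batch(domain_data, sft_data, step, total_steps=1000))
```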

Results and Implications

HuatuoGPT-II's performance is validated on multiple benchmarks, including the Chinese National Medical Licensing Examination. On these benchmarks, the model outperforms existing open-source and proprietary models, and it is particularly strong in Traditional Chinese Medicine, surpassing even GPT-4 in certain settings.

The success of HuatuoGPT-II underscores several implications for AI research and application:

  1. Simplification of Training Processes: The one-stage training model introduces a streamlined approach, potentially applicable to other domains requiring specialized language understanding, such as law or finance.
  2. Generalization and Benchmarking: The model shows robust generalization, verified on fresh benchmarks such as a newly administered medical licensing exam, which reduces the risk of benchmark contamination and strengthens the integrity of the evaluation.
  3. Future Developments: The framework sets a precedent for reducing complexity in domain adaptation of LLMs. It highlights the importance of domain-specific data transformation and the potential efficiency gains from moving away from traditional two-stage training protocols.

Conclusion

HuatuoGPT-II represents a significant advancement in the medical application of LLMs through its unified training approach. It effectively addresses the challenges associated with traditional adaptation methods, offering insights into the potential for more streamlined and effective model training across various specialized domains. As the field progresses, this work could inspire further innovations in simplifying LLM adaptation processes, enhancing their practical deployment in specialized areas.

Authors (14)
  1. Junying Chen (26 papers)
  2. Xidong Wang (30 papers)
  3. Anningzhe Gao (22 papers)
  4. Feng Jiang (97 papers)
  5. Shunian Chen (15 papers)
  6. Hongbo Zhang (54 papers)
  7. Dingjie Song (17 papers)
  8. Wenya Xie (8 papers)
  9. Chuyi Kong (3 papers)
  10. Jianquan Li (18 papers)
  11. Xiang Wan (93 papers)
  12. Haizhou Li (285 papers)
  13. Benyou Wang (109 papers)
  14. Ke Ji (27 papers)
Citations (48)