Zhongjing: Enhancing the Chinese Medical Capabilities of Large Language Model through Expert Feedback and Real-world Multi-turn Dialogue (2308.03549v3)

Published 7 Aug 2023 in cs.CL

Abstract: Recent advances in LLMs have achieved remarkable breakthroughs in understanding and responding to user intents. However, their performance lags behind general use cases in some expertise domains, such as Chinese medicine. Existing efforts to incorporate Chinese medicine into LLMs rely on Supervised Fine-Tuning (SFT) with single-turn and distilled dialogue data. These models lack the ability for doctor-like proactive inquiry and multi-turn comprehension and cannot align responses with experts' intentions. In this work, we introduce Zhongjing, the first Chinese medical LLaMA-based LLM that implements an entire training pipeline from continuous pre-training, SFT, to Reinforcement Learning from Human Feedback (RLHF). Additionally, we construct a Chinese multi-turn medical dialogue dataset of 70,000 authentic doctor-patient dialogues, CMtMedQA, which significantly enhances the model's capability for complex dialogue and proactive inquiry initiation. We also define a refined annotation rule and evaluation criteria given the unique characteristics of the biomedical domain. Extensive experimental results show that Zhongjing outperforms baselines in various capacities and matches the performance of ChatGPT in some abilities, despite having 100x fewer parameters. Ablation studies also demonstrate the contributions of each component: pre-training enhances medical knowledge, and RLHF further improves instruction-following ability and safety. Our code, datasets, and models are available at https://github.com/SupritYoung/Zhongjing.

Enhancing Chinese Medical Dialogue Capacity in LLMs: The Zhongjing Framework

The paper "Zhongjing: Enhancing the Chinese Medical Capabilities of LLM through Expert Feedback and Real-world Multi-turn Dialogue" introduces a novel framework aimed at advancing the capabilities of LLMs within the specialized context of Chinese medicine. The authors address the limitations faced by current LLMs in handling domains that require deep expertise and nuanced understanding, specifically focusing on Chinese medical knowledge and practices.

Methodological Innovations

Zhongjing establishes a comprehensive training pipeline which combines continuous pre-training, Supervised Fine-Tuning (SFT), and Reinforcement Learning from Human Feedback (RLHF). This distinct approach is supported by the construction of CMtMedQA, an extensive dataset containing 70,000 multi-turn medical dialogues. The methodology stands out due to its integration of real-world doctor-patient conversations, which enhances the model's ability to conduct intricate multi-turn interactions and proactively inquire within dialogues.
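
To make the data setup concrete, the sketch below shows one way such multi-turn doctor-patient records could be flattened into single training samples. It is a minimal illustration only: the field names (turns, role, text), the role tags, and the record layout are assumptions for the example, not CMtMedQA's actual schema.

```python
# Minimal sketch: flattening a multi-turn doctor-patient record into one
# training string. The schema ("turns", "role", "text") and the role tags
# are illustrative assumptions, not the actual CMtMedQA format.
PATIENT_TAG = "<患者>"  # "patient"
DOCTOR_TAG = "<医生>"   # "doctor"

def format_dialogue(record: dict) -> str:
    """Concatenate alternating patient/doctor turns into one training sample."""
    parts = []
    for turn in record["turns"]:
        tag = PATIENT_TAG if turn["role"] == "patient" else DOCTOR_TAG
        parts.append(f"{tag} {turn['text'].strip()}")
    return "\n".join(parts)

# Toy record: patient reports dizziness, doctor asks a proactive follow-up
# about duration and accompanying symptoms, patient answers.
example = {
    "turns": [
        {"role": "patient", "text": "最近总是头晕，需要做什么检查？"},
        {"role": "doctor", "text": "头晕持续多久了？有没有伴随恶心或视物模糊？"},
        {"role": "patient", "text": "大约一周，偶尔会恶心。"},
    ]
}
print(format_dialogue(example))
```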

Key phases in Zhongjing's development are:

  1. Continuous Pre-training: Leveraging a diverse corpus encompassing medical textbooks, clinical records, and dialogue-based medical interactions to imbue the model with a robust foundation of medical knowledge.
  2. Supervised Fine-Tuning: Utilizing a diverse instruction dataset that includes both single-turn and multi-turn dialogues, with an emphasis on proactive inquiry, a crucial aspect of medical consultations.
  3. Reinforcement Learning from Human Feedback: An annotation protocol tailored to the biomedical domain is employed. Six medical professionals rank 20,000 dialogue outputs, the rankings are used to train a reward model, and the policy is then optimized against that reward model with Proximal Policy Optimization (PPO). This step aligns the model's responses with expert judgment (a minimal sketch of the reward-model loss follows this list).
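
To ground the RLHF step, the following is a minimal PyTorch sketch of the standard pairwise preference loss (Bradley-Terry style) commonly used to fit a reward model to ranked outputs. It illustrates the general technique rather than the paper's actual reward-model code; the tensors below are toy stand-ins for the scalar scores a reward model would assign to expert-preferred and expert-rejected responses.

```python
# Sketch of a pairwise preference loss for reward-model training, as used in
# typical RLHF pipelines. Toy values stand in for real reward-model outputs.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Mean of -log sigmoid(r_chosen - r_rejected) over a batch of ranked pairs."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Scalar rewards for four ranked response pairs (chosen vs. rejected).
r_chosen = torch.tensor([1.2, 0.3, 0.9, 2.0])
r_rejected = torch.tensor([0.4, 0.5, -0.1, 1.1])
print(preference_loss(r_chosen, r_rejected))  # smaller when chosen outscores rejected
```

Once a reward model is fit this way, PPO uses its scores (typically with a KL penalty toward the SFT model) to update the policy, which is the alignment step to which the paper attributes its gains in instruction following and safety.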

Results and Findings

The experiments demonstrate that Zhongjing substantially outperforms current baseline models across the evaluated capacities, often approaching the performance of far larger models such as ChatGPT on specific tasks. The inclusion of CMtMedQA significantly contributes to its strength in handling complex, multi-turn dialogues. Ablation studies affirm the importance of continuous pre-training for medical knowledge integration and of the RLHF stage for improving user alignment, safety, and adherence to ethical standards.

Implications and Future Directions

The implications of this work are notable for both academia and industry. The innovative approach exhibited in Zhongjing could inform the development of similar domain-specific LLM applications, promoting a methodology wherein expert feedback and real-world scenarios drive model improvement. The paper hints at promising future applications, such as improved diagnosis assistance and comprehensive healthcare recommendations.

The scope for further research includes refining model safety, expanding the CMtMedQA dataset, and incorporating multimodal data to enrich context understanding. Continued development within this framework may enable LLMs to assist more effectively in clinical settings while addressing the challenge of hallucinations in AI models.

In conclusion, Zhongjing represents a pivotal advancement in applying LLMs effectively within the domain of Chinese medical practice, bringing to light methods for facilitating expert-level dialogue and decision-making in artificial intelligence-based healthcare solutions. The approach underscores the significant potential of employing structured feedback and high-quality domain-specific data in overcoming the current limitations of LLMs.

Authors (7)
  1. Songhua Yang (6 papers)
  2. Hanjie Zhao (8 papers)
  3. Senbin Zhu (4 papers)
  4. Guangyu Zhou (3 papers)
  5. Hongfei Xu (13 papers)
  6. Yuxiang Jia (11 papers)
  7. Hongying Zan (13 papers)
Citations (77)