Zhongjing: Enhancing the Chinese Medical Capabilities of Large Language Model through Expert Feedback and Real-world Multi-turn Dialogue
Abstract: Recent advances in LLMs have achieved remarkable breakthroughs in understanding and responding to user intents. However, their performance in some specialized domains, such as Chinese medicine, still lags behind their general-purpose capabilities. Existing efforts to incorporate Chinese medicine into LLMs rely on Supervised Fine-Tuning (SFT) with single-turn and distilled dialogue data. These models lack the ability for doctor-like proactive inquiry and multi-turn comprehension, and cannot align their responses with experts' intentions. In this work, we introduce Zhongjing, the first Chinese medical LLaMA-based LLM that implements a complete training pipeline spanning continual pre-training, SFT, and Reinforcement Learning from Human Feedback (RLHF). Additionally, we construct CMtMedQA, a Chinese multi-turn medical dialogue dataset of 70,000 authentic doctor-patient dialogues, which significantly enhances the model's capability for complex dialogue and proactive inquiry initiation. We also define refined annotation rules and evaluation criteria tailored to the unique characteristics of the biomedical domain. Extensive experimental results show that Zhongjing outperforms baselines in various capacities and matches the performance of ChatGPT in some abilities, despite having 100x fewer parameters. Ablation studies also demonstrate the contribution of each component: pre-training enhances medical knowledge, and RLHF further improves instruction-following ability and safety. Our code, datasets, and models are available at https://github.com/SupritYoung/Zhongjing.
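The RLHF stage described above typically begins by training a reward model on expert preference pairs (a preferred vs. a rejected response). As a minimal sketch of that idea, the snippet below computes the standard pairwise ranking loss used in InstructGPT-style reward modeling, -log(sigmoid(r_chosen - r_rejected)); the scalar scores here are hypothetical stand-ins for a reward model's outputs, not values from the paper.

```python
import math

def pairwise_rm_loss(chosen_score: float, rejected_score: float) -> float:
    """Pairwise ranking loss for reward-model training:
    -log(sigmoid(r_chosen - r_rejected)).

    The loss is small when the expert-preferred response scores
    higher than the rejected one, and large otherwise.
    """
    margin = chosen_score - rejected_score
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical reward-model scores: preferring the expert-chosen
# response yields a much smaller loss than the reverse ordering.
good = pairwise_rm_loss(2.0, 0.0)   # model agrees with the expert
bad = pairwise_rm_loss(0.0, 2.0)    # model contradicts the expert
print(f"agree={good:.4f} disagree={bad:.4f}")
```

Minimizing this loss over many expert-ranked pairs pushes the reward model to score doctor-approved responses higher, after which the policy is optimized against that reward signal (e.g., with PPO).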