MedChatZH: a Better Medical Adviser Learns from Better Instructions (2309.01114v1)

Published 3 Sep 2023 in cs.CL and cs.AI

Abstract: Generative LLMs have shown great success in various applications, including question-answering (QA) and dialogue systems. However, in specialized domains like traditional Chinese medical QA, these models may perform unsatisfactorily without fine-tuning on domain-specific datasets. To address this, we introduce MedChatZH, a dialogue model designed specifically for traditional Chinese medical QA. Our model is pre-trained on Chinese traditional medical books and fine-tuned with a carefully curated medical instruction dataset. It outperforms several solid baselines on a real-world medical dialogue dataset. We release our model, code, and dataset on https://github.com/tyang816/MedChatZH to facilitate further research in the domain of traditional Chinese medicine and LLMs.
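The abstract describes a two-stage recipe: continued pre-training on traditional Chinese medical books, followed by supervised fine-tuning on a curated medical instruction dataset. Below is a minimal sketch of the instruction fine-tuning stage using the Hugging Face Trainer; the base checkpoint name, dataset path, and prompt template are illustrative assumptions, not the released MedChatZH artifacts (see the repository above for the authors' actual code and data).

```python
# Minimal SFT sketch: fine-tune a Chinese causal LM on instruction/answer pairs.
# BASE_MODEL, the data file, and the "问/答" prompt template are assumptions for
# illustration only; they are not taken from the MedChatZH release.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "baichuan-inc/Baichuan-7B"  # hypothetical base checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # ensure padding works

def format_example(example):
    # Concatenate instruction and answer into one training sequence.
    text = f"问：{example['instruction']}\n答：{example['output']}"
    return tokenizer(text, truncation=True, max_length=1024)

# Expects JSON records with "instruction" and "output" fields (assumed schema).
dataset = load_dataset("json", data_files="med_instructions.json")["train"]
dataset = dataset.map(format_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="medchat-sft",
                           per_device_train_batch_size=2,
                           num_train_epochs=3),
    train_dataset=dataset,
    # mlm=False gives standard causal-LM labels copied from input_ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```

After training, the same tokenizer and model can be used with `model.generate` on a prompt formatted with the training template to produce a medical-dialogue response.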

Authors (5)
  1. Yang Tan
  2. Mingchen Li
  3. Zijie Huang
  4. Huiqun Yu
  5. Guisheng Fan
Citations (6)