MTCMB: Evaluating LLMs in Traditional Chinese Medicine
The paper introduces a multi-task benchmark framework—MTCMB—specifically designed for evaluating LLMs within the domain of Traditional Chinese Medicine (TCM). TCM presents unique computational challenges due to its reliance on implicit reasoning, diverse textual forms, and limited standardization, all of which distinguish it from Western medical paradigms. The paper also outlines the limitations of existing benchmarks, which either focus narrowly on factual question answering or lack domain-specific tasks and clinical realism.
Overview of MTCMB
MTCMB evaluates LLMs across five major categories: knowledge question answering (QA), language understanding, diagnostic reasoning, prescription generation, and safety evaluation. It comprises 12 sub-datasets curated in collaboration with certified TCM practitioners, including real-world case records, national licensing exams, and classical texts. The framework integrates domain-specific challenges and safety considerations that are inherent to TCM practices.
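As an illustration, the five-category, twelve-dataset structure described above could be organized as a simple task taxonomy. This is a hypothetical sketch only: the category names follow the paper's five evaluation categories, but the `SubDataset` names and sources are assumptions, since the summary does not list the actual sub-datasets.

```python
# Hypothetical sketch of MTCMB's task taxonomy. Category names mirror the
# paper's five evaluation categories; dataset names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class SubDataset:
    name: str    # illustrative identifier, not an actual MTCMB dataset name
    source: str  # e.g. "licensing exam", "case record", "classical text"

@dataclass
class TaskCategory:
    name: str
    datasets: list[SubDataset] = field(default_factory=list)

# The real benchmark distributes 12 sub-datasets across these five categories;
# only one is sketched here as an example.
CATEGORIES = [
    TaskCategory("knowledge_qa",
                 [SubDataset("exam_qa", "national licensing exam")]),
    TaskCategory("language_understanding"),
    TaskCategory("diagnostic_reasoning"),
    TaskCategory("prescription_generation"),
    TaskCategory("safety_evaluation"),
]
```

A structure like this lets an evaluation harness iterate uniformly over categories while still attaching category-specific metrics (e.g. exact match for QA, expert rubric scores for prescription generation).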
Evaluation and Results
The paper evaluates 14 state-of-the-art LLMs spanning three categories—general-purpose LLMs, medical-specialized LLMs, and reasoning-focused LLMs—using zero-shot, few-shot, and chain-of-thought prompting. Results indicate that while LLMs excel at factual knowledge retrieval and entity extraction, they exhibit substantial gaps in clinical reasoning, prescription planning, and safety compliance. Models such as GPT-4.1 and Qwen-Max perform well on factual QA but struggle with TCM-specific reasoning tasks, highlighting the need for domain-aligned training paradigms.
Implications
The findings underscore the critical need for benchmarks like MTCMB to guide the development of more competent and trustworthy medical AI systems. The paper advocates for domain-aligned training, hybrid architectures combining deep learning with symbolic reasoning frameworks, and safety-enhanced learning paradigms. These recommendations aim to address the holistic and context-dependent nature of TCM, ensuring safer and more reliable model outputs.
Future Directions
The paper suggests pursuing knowledge modeling frameworks that integrate curated datasets, symbolic reasoning grounded in TCM ontologies, and implementations that enhance safety through rule injection and toxicity filtering. By advancing these areas, researchers could develop LLMs capable of understanding and applying TCM principles effectively in clinical contexts.
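The rule-injection and toxicity-filtering idea above can be sketched as a minimal pre-output check: a generated prescription is screened against an injected list of contraindicated herb pairs. The pair list and herb names here are illustrative stand-ins; a deployed system would use an expert-curated rule set (e.g. classical TCM incompatibility rules), not this toy data.

```python
# Hypothetical rule-injection safety filter. The contraindicated pairs below
# are illustrative placeholders, not a clinically validated rule set.
INCOMPATIBLE_PAIRS = {
    frozenset({"gancao", "gansui"}),
    frozenset({"wutou", "banxia"}),
}

def violates_rules(prescription: set[str]) -> bool:
    """Return True if the prescription contains any contraindicated pair."""
    return any(pair <= prescription for pair in INCOMPATIBLE_PAIRS)
```

A filter of this shape sits after generation and before the user: flagged prescriptions are withheld or routed to a practitioner, which is one concrete way "safety through rule injection" could be enforced on model outputs.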
In conclusion, MTCMB provides a comprehensive testbed for assessing TCM-specific capabilities of LLMs, offering valuable insights and guidance for developing reliable AI systems in the TCM domain. The framework may play a pivotal role in enhancing the safety and cultural alignment of medical AI systems, although careful oversight is essential to prevent misuse and harmful recommendations.