
GPT-4 can pass the Korean National Licensing Examination for Korean Medicine Doctors (2303.17807v2)

Published 31 Mar 2023 in cs.CL and cs.LG

Abstract: Traditional Korean medicine (TKM) emphasizes individualized diagnosis and treatment. This uniqueness makes AI modeling difficult due to limited data and implicit reasoning processes. LLMs have demonstrated impressive medical inference even without specialized training on medical texts. This study assessed the capabilities of GPT-4 in TKM, using the Korean National Licensing Examination for Korean Medicine Doctors (K-NLEKMD) as a benchmark. The K-NLEKMD, administered by a national organization, encompasses 12 major subjects in TKM. We optimized prompts with Chinese-term annotation, English translation of questions and instructions, exam-optimized instruction, and self-consistency. GPT-4 with optimized prompts achieved 66.18% accuracy, surpassing both the examination's average pass mark of 60% and the 40% minimum for each subject. The gradual introduction of language-related prompts and prompting techniques raised accuracy from 51.82% to this maximum. GPT-4 showed low accuracy in subjects such as public health & medicine-related law and internal medicine (2), which are localized to Korea and TKM. The model's accuracy was lower for questions requiring TKM-specialized knowledge. It exhibited higher accuracy in diagnosis-based and recall-based questions than in intervention-based questions. A positive correlation was observed between the consistency and accuracy of GPT-4's responses. This study unveils both the potential and the challenges of applying LLMs to TKM. These findings underline the potential of LLMs like GPT-4 in culturally adapted medicine, especially TKM, for tasks such as clinical assistance, medical education, and research. However, they also point to the need for methods that mitigate the cultural bias inherent in LLMs and for validation of their efficacy in real-world clinical settings.
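
The abstract's self-consistency step can be illustrated with a minimal sketch: sample the model several times per question and take the majority answer, using the agreement ratio as a consistency score. The function `query_model` below is a hypothetical placeholder for a GPT-4 API call; the paper's exact prompt templates (Chinese-term annotation, English translation, exam-optimized instruction) are not reproduced here.

```python
# Minimal sketch of self-consistency voting for multiple-choice exam questions.
from collections import Counter
import random


def query_model(prompt: str) -> str:
    """Placeholder for one sampled GPT-4 completion returning a choice label."""
    return random.choice(["A", "B", "C", "D", "E"])


def self_consistent_answer(prompt: str, n_samples: int = 5) -> tuple[str, float]:
    """Sample the model several times and return the majority answer
    together with its agreement ratio (a simple consistency score)."""
    answers = [query_model(prompt) for _ in range(n_samples)]
    choice, count = Counter(answers).most_common(1)[0]
    return choice, count / n_samples


if __name__ == "__main__":
    answer, consistency = self_consistent_answer("Question 1: ...", n_samples=5)
    print(f"majority answer: {answer}, consistency: {consistency:.2f}")
```

The consistency score computed this way is the kind of quantity the study correlates with accuracy: questions where sampled answers agree more often tend to be answered correctly more often.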

Authors (5)
  1. Dongyeop Jang (1 paper)
  2. Tae-Rim Yun (3 papers)
  3. Choong-Yeol Lee (1 paper)
  4. Young-Kyu Kwon (1 paper)
  5. Chang-Eop Kim (5 papers)
Citations (22)