Towards Automatic Evaluation for LLMs' Clinical Capabilities: Metric, Data, and Algorithm (2403.16446v1)

Published 25 Mar 2024 in cs.CL

Abstract: LLMs are attracting increasing interest for improving clinical efficiency in medical diagnosis, owing to their unprecedented performance in modelling natural language. To ensure safe and reliable clinical applications, the evaluation of LLMs becomes critical for mitigating potential risks, e.g., hallucinations. However, current evaluation methods rely heavily on labor-intensive human participation to achieve human-preferred judgements. To overcome this challenge, we propose an automatic evaluation paradigm tailored to assess LLMs' capabilities in delivering clinical services, e.g., disease diagnosis and treatment. The evaluation paradigm contains three basic elements: metric, data, and algorithm. Specifically, inspired by professional clinical practice pathways, we formulate an LLM-specific clinical pathway (LCP) to define the clinical capabilities that a doctor agent should possess. Then, Standardized Patients (SPs) from medical education are introduced as the guideline for collecting medical evaluation data, which ensures the completeness of the evaluation procedure. Building on these elements, we develop a multi-agent framework to simulate the interactive environment between SPs and a doctor agent, equipped with a Retrieval-Augmented Evaluation (RAE) module to determine whether the behaviors of the doctor agent accord with the LCP. This paradigm can be extended to similar clinical scenarios to automatically evaluate LLMs' medical capabilities. Applying it, we construct an evaluation benchmark in the field of urology, including an LCP, an SPs dataset, and an automated RAE. Extensive experiments demonstrate the effectiveness of the proposed approach, providing insights for LLMs' safe and reliable deployment in clinical practice.
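The abstract describes a multi-agent loop in which a doctor agent (the LLM under evaluation) interviews Standardized Patients while a Retrieval-Augmented Evaluation module checks each turn against the LLM-specific clinical pathway. The sketch below illustrates one way such a loop could be wired together; it is a minimal, hypothetical rendering, and all class names, the LCP step format, the keyword-lookup SP, and the lexical retrieval are assumptions rather than the authors' code or data.

```python
# Hypothetical sketch of an SP / doctor-agent / RAE evaluation loop.
# Names and logic are illustrative stand-ins, not the paper's implementation.
from dataclasses import dataclass

@dataclass
class LCPStep:
    """One step of the LLM-specific clinical pathway (e.g. 'ask about symptom onset')."""
    stage: str        # e.g. "inquiry", "examination", "diagnosis", "treatment"
    description: str  # expected doctor behaviour at this step

@dataclass
class SimulatedPatient:
    """Standardized Patient (SP): answers the doctor agent from a fixed case script."""
    case_script: dict[str, str]  # keyword -> scripted answer for one patient case

    def respond(self, doctor_utterance: str) -> str:
        # A real SP agent would be an LLM constrained to the script; this is a lookup stub.
        for keyword, answer in self.case_script.items():
            if keyword in doctor_utterance.lower():
                return answer
        return "I'm not sure, doctor."

class RetrievalAugmentedEvaluator:
    """RAE stub: retrieve the LCP steps most relevant to a doctor turn and report them."""
    def __init__(self, pathway: list[LCPStep]):
        self.pathway = pathway

    def retrieve(self, doctor_utterance: str, k: int = 2) -> list[LCPStep]:
        words = set(doctor_utterance.lower().split())
        # Toy lexical overlap; the paper's RAE would use a proper retriever plus a judge model.
        scored = sorted(self.pathway,
                        key=lambda s: -len(words & set(s.description.lower().split())))
        return scored[:k]

    def judge(self, doctor_utterance: str) -> dict:
        steps = self.retrieve(doctor_utterance)
        return {"utterance": doctor_utterance,
                "matched_lcp_steps": [s.description for s in steps]}

def run_consultation(doctor_turns: list[str],
                     patient: SimulatedPatient,
                     rae: RetrievalAugmentedEvaluator) -> list[dict]:
    """Score each doctor-agent turn of one simulated SP consultation against the LCP."""
    results = []
    for question in doctor_turns:  # in practice these come from the LLM under evaluation
        verdict = rae.judge(question)
        verdict["patient_reply"] = patient.respond(question)
        results.append(verdict)
    return results

if __name__ == "__main__":
    pathway = [
        LCPStep("inquiry", "ask about onset and duration of urinary symptoms"),
        LCPStep("examination", "order a urinalysis and renal ultrasound"),
        LCPStep("diagnosis", "state the most likely urological diagnosis"),
    ]
    patient = SimulatedPatient({"symptoms": "I have had painful urination for three days."})
    rae = RetrievalAugmentedEvaluator(pathway)
    for record in run_consultation(
            ["Can you describe your symptoms and how long you have had them?"],
            patient, rae):
        print(record)
```

The key design point the abstract implies is that the RAE grounds its judgement in retrieved LCP steps rather than free-form grading, so each doctor turn is scored against an explicit, pathway-derived expectation.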

Authors (14)
  1. Lei Liu (332 papers)
  2. Xiaoyan Yang (50 papers)
  3. Fangzhou Li (5 papers)
  4. Chenfei Chi (3 papers)
  5. Yue Shen (243 papers)
  6. Shiwei Lyu
  7. Ming Zhang
  8. Xiaowei Ma (3 papers)
  9. Xiangguo Lyu (1 paper)
  10. Liya Ma (5 papers)
  11. Zhiqiang Zhang (129 papers)
  12. Wei Xue (149 papers)
  13. Yiran Huang (13 papers)
  14. Jinjie Gu (50 papers)
Citations (5)