MediQ: Question-Asking LLMs and a Benchmark for Reliable Interactive Clinical Reasoning (2406.00922v3)

Published 3 Jun 2024 in cs.CL and cs.AI

Abstract: Users typically engage with LLMs interactively, yet most existing benchmarks evaluate them in a static, single-turn format, posing reliability concerns in interactive scenarios. We identify a key obstacle towards reliability: LLMs are trained to answer any question, even with incomplete context or insufficient knowledge. In this paper, we propose to change the static paradigm to an interactive one, develop systems that proactively ask questions to gather more information and respond reliably, and introduce a benchmark - MediQ - to evaluate question-asking ability in LLMs. MediQ simulates clinical interactions consisting of a Patient System and an adaptive Expert System; with potentially incomplete initial information, the Expert refrains from making diagnostic decisions when unconfident, and instead elicits missing details via follow-up questions. We provide a pipeline to convert single-turn medical benchmarks into an interactive format. Our results show that directly prompting state-of-the-art LLMs to ask questions degrades performance, indicating that adapting LLMs to proactive information-seeking settings is nontrivial. We experiment with abstention strategies to better estimate model confidence and decide when to ask questions, improving diagnostic accuracy by 22.3%; however, performance still lags compared to an (unrealistic in practice) upper bound with complete information upfront. Further analyses show improved interactive performance with filtering irrelevant contexts and reformatting conversations. Overall, we introduce a novel problem towards LLM reliability, an interactive MediQ benchmark and a novel question-asking system, and highlight directions to extend LLMs' information-seeking abilities in critical domains.

Evaluating the MediQ Framework for Clinical Reasoning with LLMs

The paper, "M EDI Q: Question-Asking LLMs for Adaptive and Reliable Clinical Reasoning," addresses significant challenges in deploying language learning models (LLMs) in medical environments. This research introduces the M EDI Q framework—an interactive structure meant to advance the diagnostic potential of LLMs through dynamically simulated clinical consultations. This paper tackles a notable weakness of current LLMs: their tendency to answer questions without adequate contextual information, which can lead to unreliable outputs in critical applications such as healthcare.

In conventional medical QA tasks, LLMs are engaged in single-turn interactions: they receive complete information upfront and then produce an answer. This setting contrasts sharply with real-world clinical scenarios, where patient information is often incomplete and clinicians must iteratively seek information to reach a correct diagnosis. The paper therefore proposes an interactive framework with two main components: a Patient system, which simulates a patient who initially provides only partial information, and an Expert system, which acts as a clinician's assistant tasked with eliciting additional details before delivering a diagnostic verdict.
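To make this interaction concrete, the sketch below shows one way such a Patient/Expert loop could be wired together. The names (PatientSystem, ExpertSystem, run_consultation) and the crude keyword-matching patient reply are illustrative placeholders under stated assumptions, not the paper's actual implementation, which uses LLMs for both roles.

```python
from dataclasses import dataclass


@dataclass
class PatientSystem:
    """Holds the full patient record but reveals only the initial complaint upfront."""
    initial_info: str
    full_facts: list[str]

    def answer(self, question: str) -> str:
        # Stand-in for the paper's LLM-based Patient system, which answers
        # follow-up questions grounded only in the patient's record.
        keywords = [w for w in question.lower().split() if len(w) > 3]
        for fact in self.full_facts:
            if any(word in fact.lower() for word in keywords):
                return fact
        return "The patient cannot provide that information."


@dataclass
class ExpertSystem:
    """Either asks a follow-up question or commits to a final answer."""
    max_turns: int = 10

    def decide(self, context: list[str]) -> tuple[str, str]:
        # Placeholder for an LLM call returning ("ask", question) when
        # unconfident, or ("answer", choice) when confident enough to commit.
        raise NotImplementedError


def run_consultation(patient: PatientSystem, expert: ExpertSystem) -> str:
    context = [patient.initial_info]
    for _ in range(expert.max_turns):
        action, payload = expert.decide(context)
        if action == "answer":
            return payload                        # final diagnostic choice
        context.append(patient.answer(payload))   # record the patient's reply
    # Turn budget exhausted: force a final answer with the gathered context.
    _, payload = expert.decide(context + ["You must answer now."])
    return payload
```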

Significantly, the authors provide a conversion pipeline that they apply to two existing medical QA datasets, MedQA and CRAFT-MD, turning them into interactive setups. They benchmark state-of-the-art LLMs such as GPT-3.5, Llama-3, and GPT-4 in this interactive environment. Findings reveal that directly prompting these models to ask questions degrades the quality of clinical reasoning, exposing a clear gap between static and interactive diagnostic accuracy.
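As a rough illustration of what such a conversion could look like, the snippet below splits a single-turn vignette into an initial statement plus hidden facts that the Patient system reveals only on request. The to_interactive function and the item schema (question/options/answer keys) are assumptions made for illustration, not the benchmarks' actual format or the paper's exact pipeline.

```python
import re


def to_interactive(item: dict) -> dict:
    """Split a full case vignette into an initial statement plus hidden facts."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", item["question"]) if s.strip()]
    return {
        "initial_info": sentences[0],   # e.g. age, sex, chief complaint
        "hidden_facts": sentences[1:],  # revealed only when the Expert asks
        "options": item["options"],
        "answer": item["answer"],
    }


example = {
    "question": ("A 45-year-old woman presents with chest pain. "
                 "The pain worsens with deep inspiration. "
                 "She recently returned from a long-haul flight."),
    "options": {"A": "Pulmonary embolism", "B": "Stable angina"},
    "answer": "A",
}
print(to_interactive(example)["initial_info"])
# -> "A 45-year-old woman presents with chest pain."
```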

Quantitatively, the MediQ framework improves GPT-3.5 performance on incomplete information by 22.3% through confidence estimation strategies that incorporate explicit reasoning and self-consistency. Nevertheless, models operating under such setups still underperform by 10.3% compared to scenarios where complete information is provided initially. These outcomes suggest that while utilizing interaction strategies enhances models' diagnostic accuracy, there is substantial potential for continued improvement.
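A minimal sketch of a self-consistency-style abstention strategy, in the spirit of the confidence estimation described above: sample several answers, use the majority share as a confidence proxy, and ask another question when it falls below a threshold. The confident_or_ask helper and the sample_answer callable are hypothetical stand-ins for actual LLM calls; the paper's exact scoring and thresholds may differ.

```python
from collections import Counter
from typing import Callable


def confident_or_ask(
    sample_answer: Callable[[], str],   # one stochastic LLM answer per call
    n_samples: int = 5,
    threshold: float = 0.8,
) -> tuple[bool, str]:
    """Return (True, answer) to commit, or (False, answer) to abstain and ask."""
    votes = Counter(sample_answer() for _ in range(n_samples))
    top_answer, top_count = votes.most_common(1)[0]
    confidence = top_count / n_samples
    if confidence >= threshold:
        return True, top_answer           # confident: commit to the answer
    return False, top_answer              # unconfident: ask a follow-up instead
```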

The paper's main contributions include introducing the MediQ framework for more realistic medical consultations, developing a reliable Patient system for simulating human interactions, and enhancing the understanding of LLMs' inadequacy in proactive information-seeking. Additionally, their proposed MediQ Expert system partially bridges the gap between perfect and incomplete information setups through innovative abstention strategies, reducing unconfident answer generation. Importantly, the authors have made their code and data publicly accessible, which facilitates future research.

The practical implications of this research are profound, as they highlight the potential of integrating information-seeking behaviors in LLMs to bring artificial intelligence one step closer to being a reliable tool in clinical settings. Theoretically, this work expands on the interactive potential and shortcomings of LLMs, pointing toward a critical area for further development—equipping LLMs with nuanced information-seeking capabilities that mirror those in real-world clinical scenarios.

As AI development continues, enhancing these interactive frameworks and addressing identified gaps will be essential. Future research could focus on refining the MediQ framework by improving patient interaction simulations, integrating knowledge from diverse medical datasets, and exploring collaborative decision-making processes involving humans and AI. By fostering the advancement of LLMs in active clinical reasoning tasks, these developments may significantly contribute to the wider adoption and trust of AI systems in the healthcare domain.

Authors (7)
  1. Shuyue Stella Li (22 papers)
  2. Vidhisha Balachandran (31 papers)
  3. Shangbin Feng (53 papers)
  4. Emma Pierson (38 papers)
  5. Pang Wei Koh (64 papers)
  6. Yulia Tsvetkov (142 papers)
  7. Jonathan S. Ilgen (2 papers)
Citations (3)