Tool Calling: Enhancing Medication Consultation via Retrieval-Augmented Large Language Models (2404.17897v1)

Published 27 Apr 2024 in cs.CL

Abstract: Large-scale LLMs have achieved remarkable success across various language tasks but suffer from hallucinations and temporal misalignment. To mitigate these shortcomings, retrieval-augmented generation (RAG) has been used to supply external knowledge that supports answer generation. However, applying such models to the medical domain faces several challenges due to the lack of domain-specific knowledge and the intricacy of real-world scenarios. In this study, we explore LLMs with a RAG framework for knowledge-intensive tasks in the medical field. To evaluate the capabilities of LLMs, we introduce MedicineQA, a multi-round dialogue benchmark that simulates real-world medication consultation and requires LLMs to answer with evidence retrieved from a medicine database. MedicineQA contains 300 multi-round question-answering pairs, each embedded within a detailed dialogue history, highlighting the challenge this knowledge-intensive task poses to current LLMs. We further propose a new Distill-Retrieve-Read framework in place of the previous Retrieve-then-Read. Specifically, the distillation and retrieval steps use a tool-calling mechanism to formulate search queries that emulate the keyword-based inquiries used by search engines. Experimental results show that our framework brings notable performance improvements and surpasses previous counterparts in evidence retrieval accuracy. This advancement sheds light on applying RAG to the medical domain.
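
The Distill-Retrieve-Read pipeline described in the abstract can be pictured as three stages: condense the dialogue history into a search query, retrieve evidence from a medicine database, then answer grounded in that evidence. The sketch below is a minimal illustration only, not the paper's implementation: the names (`Turn`, `distill_query`, `retrieve_evidence`, `read_and_answer`), the plain keyword-overlap retriever, and the generic `llm` callable are all assumptions standing in for the paper's tool-calling prompts, retriever, and MedicineQA database.

```python
# Minimal sketch of a Distill-Retrieve-Read pipeline for medication consultation.
# All names below are hypothetical; the paper's actual prompts, tool schema,
# retriever, and medicine database are not reproduced here.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Turn:
    role: str      # "patient" or "assistant"
    content: str   # one utterance in the multi-round dialogue


def distill_query(dialogue: List[Turn], llm: Callable[[str], str]) -> str:
    """Distill: ask the LLM (via a tool-calling style prompt) to condense the
    dialogue history into a keyword query, emulating search-engine input."""
    history = "\n".join(f"{t.role}: {t.content}" for t in dialogue)
    prompt = (
        "Call the tool search(keywords) with the keywords needed to answer "
        "the patient's latest question.\n" + history
    )
    return llm(prompt)  # e.g. "amoxicillin pediatric dosage"


def retrieve_evidence(query: str, database: List[str], top_k: int = 3) -> List[str]:
    """Retrieve: rank medicine-database entries by keyword overlap with the
    distilled query (a stand-in for a real retriever)."""
    keywords = set(query.lower().split())
    ranked = sorted(database, key=lambda doc: -len(keywords & set(doc.lower().split())))
    return ranked[:top_k]


def read_and_answer(dialogue: List[Turn], evidence: List[str],
                    llm: Callable[[str], str]) -> str:
    """Read: generate the consultation answer grounded in the retrieved evidence."""
    history = "\n".join(f"{t.role}: {t.content}" for t in dialogue)
    prompt = ("Evidence:\n" + "\n".join(evidence) +
              "\n\nDialogue:\n" + history + "\n\nAnswer the latest question:")
    return llm(prompt)
```

In the framework the abstract describes, the distillation step emits the query through the model's tool-calling interface rather than as free text; the sketch reduces that to a single prompt for brevity.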

Authors (8)
  1. Zhongzhen Huang (15 papers)
  2. Kui Xue (10 papers)
  3. Yongqi Fan (5 papers)
  4. Linjie Mu (4 papers)
  5. Ruoyu Liu (8 papers)
  6. Tong Ruan (22 papers)
  7. Shaoting Zhang (133 papers)
  8. Xiaofan Zhang (79 papers)
Citations (2)