
Medical Dialogue Response Generation with Pivotal Information Recalling (2206.08611v1)

Published 17 Jun 2022 in cs.AI

Abstract: Medical dialogue generation is an important yet challenging task. Most previous works rely on the attention mechanism and large-scale pretrained LLMs. However, these methods often fail to acquire pivotal information from a long dialogue history to yield an accurate and informative response, because medical entities are usually scattered across multiple utterances and linked by complex relationships. To mitigate this problem, we propose a medical response generation model with Pivotal Information Recalling (MedPIR), which is built on two components: a knowledge-aware dialogue graph encoder and a recall-enhanced generator. The knowledge-aware dialogue graph encoder constructs a dialogue graph by exploiting the knowledge relationships between entities in the utterances, and encodes it with a graph attention network. The recall-enhanced generator then strengthens the use of this pivotal information by generating a summary of the dialogue before producing the actual response. Experimental results on two large-scale medical dialogue datasets show that MedPIR outperforms strong baselines in BLEU score and medical entity F1.
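The abstract says the dialogue graph encoder applies a graph attention network over entity nodes connected by knowledge relationships. As an illustrative sketch only (not the authors' implementation; the entity names, dimensions, and single-layer setup below are assumptions), one GAT-style attention layer over a small dialogue entity graph can be written as:

```python
import numpy as np

def graph_attention(H, A, W, a):
    """One GAT-style attention layer over a dialogue entity graph.

    H: (N, F) node features; A: (N, N) adjacency (1 = knowledge edge,
    self-loops included); W: (F, F') projection; a: (2*F',) attention vector.
    Returns the (N, F') attention-aggregated node representations.
    """
    Z = H @ W                                  # project node features
    N = Z.shape[0]
    scores = np.full((N, N), -np.inf)          # mask non-edges
    for i in range(N):
        for j in range(N):
            if A[i, j]:
                # attention logit on the concatenated pair, with LeakyReLU
                e = a @ np.concatenate([Z[i], Z[j]])
                scores[i, j] = e if e > 0 else 0.2 * e
    # softmax over each node's neighbourhood
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ Z

# Toy demo: 4 hypothetical entity nodes (e.g. symptoms/diseases scattered
# across utterances), self-loops plus a few knowledge edges.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 8))
a = rng.normal(size=(16,))
A = np.eye(4)
A[0, 1] = A[1, 0] = 1                          # e.g. "cough" -- "bronchitis"
A[1, 2] = A[2, 1] = 1
out = graph_attention(H, A, W, a)              # shape (4, 8)
```

In MedPIR these aggregated entity representations would feed the recall-enhanced generator, which first decodes a dialogue summary and then the response; the sketch above covers only the attention aggregation step.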

Authors (8)
  1. Yu Zhao (208 papers)
  2. Yunxin Li (29 papers)
  3. Yuxiang Wu (27 papers)
  4. Baotian Hu (67 papers)
  5. Qingcai Chen (36 papers)
  6. Xiaolong Wang (243 papers)
  7. Yuxin Ding (9 papers)
  8. Min Zhang (630 papers)
Citations (13)
