Post-hoc Utterance Refining Method by Entity Mining for Faithful Knowledge Grounded Conversations (2406.10809v1)

Published 16 Jun 2024 in cs.CL and cs.AI

Abstract: Despite striking recent advances in language generation performance, model-generated responses suffer from the chronic problem of hallucinations that are either untrue or unfaithful to a given source. Especially in knowledge-grounded conversation, models are required to generate informative responses, yet hallucinated utterances lead to miscommunication. In particular, entity-level hallucination, which causes critical misinformation and undesirable conversation, is one of the major concerns. To address this issue, we propose a post-hoc refinement method called REM. It aims to enhance the quality and faithfulness of hallucinated utterances by refining them based on the source knowledge. If a generated utterance has a low source-faithfulness score with respect to the given knowledge, REM mines the key entities in the knowledge and implicitly uses them to refine the utterance. We verify that our method reduces entity hallucination in utterances, and we show the adaptability and efficacy of REM with extensive experiments and generative results. Our code is available at https://github.com/YOONNAJANG/REM.
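
The abstract describes a conditional post-hoc pipeline: score an utterance's faithfulness to the source knowledge, and only if the score is low, mine key entities from the knowledge and use them to refine the utterance. The sketch below illustrates that control flow only; the faithfulness scorer (word overlap), entity miner (capitalized-span matching), and refiner (a stub that would normally call a generative model) are placeholder assumptions, not the authors' implementation, which lives in the linked repository.

```python
# Minimal sketch of the post-hoc refinement loop described in the abstract.
# All three components are placeholders for illustration only.

import re
from typing import List


def faithfulness_score(utterance: str, knowledge: str) -> float:
    """Placeholder source-faithfulness score: fraction of utterance tokens
    that also appear in the knowledge snippet."""
    utt_tokens = set(re.findall(r"\w+", utterance.lower()))
    kb_tokens = set(re.findall(r"\w+", knowledge.lower()))
    if not utt_tokens:
        return 0.0
    return len(utt_tokens & kb_tokens) / len(utt_tokens)


def mine_key_entities(knowledge: str) -> List[str]:
    """Placeholder entity miner: pull capitalized spans from the knowledge.
    A real system would use an NER model or keyphrase extractor."""
    return re.findall(r"\b[A-Z][a-zA-Z]+(?:\s+[A-Z][a-zA-Z]+)*", knowledge)


def refine(utterance: str, knowledge: str, entities: List[str]) -> str:
    """Stub refiner: in the paper this step is a generative model conditioned
    on the knowledge and mined entities; here we only annotate the input."""
    return f"{utterance} [refined with entities: {', '.join(entities)}]"


def rem_post_hoc(utterance: str, knowledge: str, threshold: float = 0.5) -> str:
    """Refine the utterance only when it falls below the faithfulness threshold."""
    if faithfulness_score(utterance, knowledge) >= threshold:
        return utterance  # already faithful enough; leave it untouched
    entities = mine_key_entities(knowledge)
    return refine(utterance, knowledge, entities)


if __name__ == "__main__":
    knowledge = "The Eiffel Tower is located in Paris and was completed in 1889."
    utterance = "It was built in Rome sometime in the twentieth century."
    print(rem_post_hoc(utterance, knowledge))
```

The low-scoring example utterance is routed through entity mining and refinement, while a faithful utterance would be returned unchanged; the threshold value here is arbitrary.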

Authors (9)
  1. Yoonna Jang (9 papers)
  2. Suhyune Son (4 papers)
  3. Jeongwoo Lee (35 papers)
  4. Junyoung Son (4 papers)
  5. Yuna Hur (4 papers)
  6. Jungwoo Lim (5 papers)
  7. Hyeonseok Moon (20 papers)
  8. Kisu Yang (7 papers)
  9. Heuiseok Lim (49 papers)
GitHub: https://github.com/YOONNAJANG/REM