Call for Customized Conversation: Customized Conversation Grounding Persona and Knowledge (2112.08619v3)

Published 16 Dec 2021 in cs.CL and cs.AI

Abstract: Humans usually have conversations by drawing on prior knowledge about a topic and background information about the people they are talking to. However, existing conversational agents and datasets do not consider such comprehensive information, so they are limited in generating utterances where knowledge and persona are properly fused. To address this issue, we introduce a call For Customized conversation (FoCus) dataset, in which customized answers are built with the user's persona and Wikipedia knowledge. To evaluate the ability of pre-trained language models to make informative and customized utterances, we utilize BART and GPT-2 as well as transformer-based models. We assess their generation abilities with automatic scores and conduct human evaluations for qualitative results. We examine whether the models reflect adequate persona and knowledge with our two proposed sub-tasks, persona grounding (PG) and knowledge grounding (KG). Moreover, we show through a grounding quality assessment that the utterances in our data are constructed with the proper knowledge and persona.

Authors (9)
  1. Yoonna Jang (9 papers)
  2. Jungwoo Lim (5 papers)
  3. Yuna Hur (4 papers)
  4. Dongsuk Oh (7 papers)
  5. Suhyune Son (4 papers)
  6. Yeonsoo Lee (9 papers)
  7. Donghoon Shin (14 papers)
  8. Seungryong Kim (103 papers)
  9. Heuiseok Lim (49 papers)
Citations (37)
