Are Pre-trained Language Models Knowledgeable to Ground Open Domain Dialogues? (2011.09708v1)
Published 19 Nov 2020 in cs.CL
Abstract: We study knowledge-grounded dialogue generation with pre-trained language models. Instead of pursuing a new state-of-the-art on benchmarks, we try to understand whether the knowledge stored in the parameters of pre-trained models is already sufficient to ground open-domain dialogues, which would allow us to drop the dependency on external knowledge sources during generation. Through extensive experiments on benchmarks, we find that by fine-tuning on a few dialogues containing knowledge, pre-trained language models can outperform the state-of-the-art model that requires external knowledge, in terms of both automatic evaluation and human judgment, suggesting a positive answer to the question we raised.
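To make the setup concrete, here is a minimal sketch (not the authors' code) of the kind of experiment the abstract describes: fine-tuning a pre-trained causal language model on a handful of (context, response) pairs, with no retrieved knowledge passage concatenated to the input, so any factual content must come from the model's parameters. The choice of GPT-2, the toy in-memory dialogues, and the hyperparameters are all illustrative assumptions.

```python
# Hedged sketch: fine-tune GPT-2 on a few knowledge-bearing dialogues without
# feeding it any external knowledge source. Assumes Hugging Face `transformers`
# and PyTorch; the dataset below is a toy stand-in, not the paper's data.
import torch
from torch.optim import AdamW
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()

# Each example is just (context, response); no retrieved document is appended.
dialogues = [
    ("Do you like science fiction?",
     "Yes, Dune by Frank Herbert is a classic 1965 novel."),
    ("Tell me about jazz.",
     "Jazz originated in New Orleans around the turn of the 20th century."),
]

optimizer = AdamW(model.parameters(), lr=5e-5)
for epoch in range(3):
    for context, response in dialogues:
        text = context + tokenizer.eos_token + response + tokenizer.eos_token
        batch = tokenizer(text, return_tensors="pt")
        # Standard causal-LM loss: the model must supply factual content
        # from its own parameters rather than from an external source.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Generation conditions only on the dialogue context.
prompt = tokenizer("Tell me about jazz." + tokenizer.eos_token, return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=40, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

In the paper's actual evaluation, responses produced this way are compared against a knowledge-grounded baseline using both automatic metrics and human judgment; the snippet above only illustrates the knowledge-free conditioning, not the evaluation protocol.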
- Yufan Zhao (5 papers)
- Wei Wu (481 papers)
- Can Xu (98 papers)