Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models (2309.12940v1)
Abstract: Task-oriented dialogue (TOD) systems help users accomplish a wide range of tasks through multi-turn dialogues, but large language models (LLMs) often struggle to comprehend these intricate contexts. In this study, we propose a novel "Self-Explanation" prompting strategy to enhance the comprehension abilities of LLMs in multi-turn dialogues. This task-agnostic approach requires the model to analyze each dialogue utterance before executing the task, thereby improving performance across various dialogue-centric tasks. Experimental results on six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts, demonstrating its potential as a powerful tool for enhancing LLMs' comprehension in complex dialogue tasks.
- Haoyu Gao (17 papers)
- Ting-En Lin (28 papers)
- Hangyu Li (23 papers)
- Min Yang (239 papers)
- Yuchuan Wu (33 papers)
- Wentao Ma (35 papers)
- Yongbin Li (128 papers)
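
The core idea described in the abstract is straightforward to illustrate: before performing the downstream task, the model is asked to explain each utterance in the dialogue. Below is a minimal, hypothetical sketch of such a prompt builder in Python; the template wording, the `build_self_explanation_prompt` helper, and the example dialogue-state task are illustrative assumptions, not the authors' exact prompt.

```python
# Illustrative sketch of a self-explanation style prompt for dialogue understanding.
# The prompt template and task phrasing below are assumptions for demonstration,
# not the paper's exact wording; the idea is simply to ask the model to explain
# each utterance before producing its final answer (zero-shot).

def build_self_explanation_prompt(dialogue: list[tuple[str, str]], task: str) -> str:
    """Format a multi-turn dialogue and ask the model to explain every
    utterance before performing the downstream task."""
    history = "\n".join(f"{speaker}: {utterance}" for speaker, utterance in dialogue)
    return (
        "You are given a task-oriented dialogue.\n"
        f"{history}\n\n"
        "First, explain the intent of each utterance in order, one line per turn.\n"
        f"Then, using those explanations, {task}"
    )

if __name__ == "__main__":
    dialogue = [
        ("User", "I need a train from Cambridge to London on Friday."),
        ("System", "Trains leave every hour. What time would you like to depart?"),
        ("User", "Around 9 am, and please book it for two people."),
    ]
    prompt = build_self_explanation_prompt(
        dialogue, "list the dialogue state as slot-value pairs."
    )
    print(prompt)  # Send this string to any chat LLM; no few-shot examples are needed.
```

Because the prompt only asks for utterance-level explanations before the task instruction, the same builder can be reused across different dialogue tasks (state tracking, response selection, etc.) by swapping the `task` argument.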