Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models (2309.12940v1)

Published 22 Sep 2023 in cs.CL and cs.AI

Abstract: Task-oriented dialogue (TOD) systems facilitate users in executing various activities via multi-turn dialogues, but LLMs often struggle to comprehend these intricate contexts. In this study, we propose a novel "Self-Explanation" prompting strategy to enhance the comprehension abilities of LLMs in multi-turn dialogues. This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks. Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts, demonstrating its potential as a powerful tool in enhancing LLMs' comprehension in complex dialogue tasks.
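
To make the strategy concrete, below is a minimal sketch in Python of how a zero-shot Self-Explanation prompt might be assembled: the model is first asked to explain each dialogue turn, then to perform the downstream task using those explanations. The prompt wording, the `complete` callable, and the example dialogue are illustrative assumptions, not the authors' exact prompt or evaluation setup.

```python
# Sketch of Self-Explanation prompting for a task-oriented dialogue (TOD) task.
# The exact instruction text is an assumption; the paper's core idea is that the
# model analyzes each utterance before executing the task.

from typing import Callable, List


def self_explanation_prompt(dialogue: List[str], task_instruction: str) -> str:
    """Build a zero-shot Self-Explanation prompt over a multi-turn dialogue."""
    turns = "\n".join(f"Turn {i + 1}: {utt}" for i, utt in enumerate(dialogue))
    return (
        "Dialogue:\n"
        f"{turns}\n\n"
        "First, explain the meaning and intent of each turn, one by one.\n"
        f"Then, using your explanations, {task_instruction}"
    )


def run_with_self_explanation(
    complete: Callable[[str], str],  # any text-completion function, e.g. an LLM API call
    dialogue: List[str],
    task_instruction: str,
) -> str:
    """Send the Self-Explanation prompt to a given completion function."""
    return complete(self_explanation_prompt(dialogue, task_instruction))


if __name__ == "__main__":
    # Hypothetical restaurant-booking dialogue, in the style of TOD benchmarks.
    dialogue = [
        "User: I need a cheap Italian restaurant in the centre.",
        "System: Pizza Hut City Centre is a cheap Italian place. Shall I book it?",
        "User: Yes, for four people at 7pm on Friday.",
    ]
    print(self_explanation_prompt(
        dialogue,
        "extract the user's booking constraints as a JSON object.",
    ))
```

Because the approach only changes the prompt, it is task-agnostic: the same wrapper can precede state tracking, response selection, or any other dialogue-centric task without few-shot exemplars.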

Authors (7)
  1. Haoyu Gao (17 papers)
  2. Ting-En Lin (28 papers)
  3. Hangyu Li (23 papers)
  4. Min Yang (239 papers)
  5. Yuchuan Wu (33 papers)
  6. Wentao Ma (35 papers)
  7. Yongbin Li (128 papers)
Citations (6)