Are LLMs Robust for Spoken Dialogues? (2401.02297v1)

Published 4 Jan 2024 in cs.CL

Abstract: Large pre-trained language models (LLMs) have demonstrated state-of-the-art performance in different downstream tasks, including dialogue state tracking and end-to-end response generation. Nevertheless, most of the publicly available datasets and benchmarks on task-oriented dialogues focus on written conversations. Consequently, the robustness of the developed models to spoken interactions is unknown. In this work, we have evaluated the performance of LLMs for spoken task-oriented dialogues on the DSTC11 test sets. Due to the lack of proper spoken dialogue datasets, we have automatically transcribed a development set of spoken dialogues with a state-of-the-art ASR engine. We have characterized the ASR-error types and their distributions and simulated these errors in a large dataset of dialogues. We report the intrinsic (perplexity) and extrinsic (human evaluation) performance of fine-tuned GPT-2 and T5 models in the two subtasks of response generation and dialogue state tracking, respectively. The results show that LLMs are not robust to spoken noise by default; however, fine-tuning/training such models on a proper dataset of spoken TODs can result in more robust performance.
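The error-simulation and intrinsic-evaluation steps described in the abstract can be sketched roughly as below. The function names, error rates, and confusion table are illustrative assumptions, not the ASR-error distributions or fine-tuned models reported in the paper; the snippet only injects word-level substitutions, deletions, and insertions into written utterances and compares GPT-2 perplexity on clean versus noisy text.

```python
import random

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast


def simulate_asr_noise(utterance, sub_rate=0.08, del_rate=0.03, ins_rate=0.02,
                       confusions=None, seed=0):
    """Corrupt a written utterance with word-level substitutions, deletions,
    and insertions (illustrative rates, not the paper's measured distributions)."""
    rng = random.Random(seed)
    confusions = confusions or {}
    noisy = []
    for word in utterance.split():
        r = rng.random()
        if r < del_rate:
            continue                                           # deletion: drop the word
        if r < del_rate + sub_rate:
            noisy.append(confusions.get(word.lower(), "uh"))   # substitution via confusion pair
        else:
            noisy.append(word)
        if rng.random() < ins_rate:
            noisy.append("uh")                                 # insertion: spurious filler token
    return " ".join(noisy)


def perplexity(model, tokenizer, text):
    """Intrinsic evaluation: perplexity of a causal LM on a (possibly noisy) turn."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()


if __name__ == "__main__":
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    clean = "I would like to book a table for two at an Italian restaurant tonight."
    noisy = simulate_asr_noise(clean, confusions={"book": "look", "two": "too"})

    print("clean:", perplexity(model, tokenizer, clean))
    print("noisy:", perplexity(model, tokenizer, noisy))
```

Under this kind of setup, the noisy transcript typically yields a noticeably higher perplexity than the clean one, mirroring the robustness gap the abstract attributes to models that have not been fine-tuned on spoken TODs.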

Authors (6)
  1. Seyed Mahed Mousavi (9 papers)
  2. Gabriel Roccabruna (8 papers)
  3. Simone Alghisi (5 papers)
  4. Massimo Rizzoli (5 papers)
  5. Mirco Ravanelli (72 papers)
  6. Giuseppe Riccardi (26 papers)
Citations (7)