
A Full-duplex Speech Dialogue Scheme Based On Large Language Models (2405.19487v2)

Published 29 May 2024 in cs.CL

Abstract: We present a generative dialogue system capable of operating in a full-duplex manner, allowing for seamless interaction. It is based on an LLM carefully aligned to be aware of a perception module, a motor function module, and the concept of a simple finite state machine (called neural FSM) with two states. The perception and motor function modules operate in tandem, allowing the system to speak and listen to the user simultaneously. The LLM generates textual tokens for inquiry responses and makes autonomous decisions to start responding to, wait for, or interrupt the user by emitting control tokens to the neural FSM. All these tasks of the LLM are carried out as next token prediction on a serialized view of the dialogue in real-time. In automatic quality evaluations simulating real-life interaction, the proposed system reduces the average conversation response latency by more than threefold compared with LLM-based half-duplex dialogue systems while responding within less than 500 milliseconds in more than 50% of evaluated interactions. Running an LLM with only 8 billion parameters, our system exhibits an 8% higher interruption precision rate than the best available commercial LLM for voice-based dialogue.
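The core mechanism in the abstract, an LLM that interleaves text tokens with control tokens driving a two-state neural FSM, can be illustrated with a minimal sketch. The control-token names and the driver loop below are hypothetical illustrations of the idea, not the paper's actual token vocabulary or implementation:

```python
from enum import Enum

class State(Enum):
    LISTEN = "listen"
    SPEAK = "speak"

# Hypothetical control tokens an aligned LLM might emit to drive the FSM.
CTRL_SPEAK = "<|speak|>"    # start responding (or interrupt the user)
CTRL_LISTEN = "<|listen|>"  # stop talking and wait for / listen to the user

def run_fsm(token_stream):
    """Consume a serialized token stream: control tokens flip the FSM state,
    and ordinary text tokens are routed to the motor (speech) module only
    while the system is in the SPEAK state."""
    state = State.LISTEN
    spoken, transitions = [], []
    for tok in token_stream:
        if tok == CTRL_SPEAK:
            state = State.SPEAK
            transitions.append(state)
        elif tok == CTRL_LISTEN:
            state = State.LISTEN
            transitions.append(state)
        elif state is State.SPEAK:
            spoken.append(tok)  # would be sent to the speech-synthesis module
    return spoken, transitions

# Example: the model decides to answer, speaks, then yields the floor.
stream = [CTRL_SPEAK, "Sure,", "here's", "an", "answer.", CTRL_LISTEN]
spoken, transitions = run_fsm(stream)
```

Because both the response text and the turn-taking decisions are just next-token predictions over one serialized stream, a single decoding loop like this covers speaking, waiting, and interrupting.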

Authors (6)
  1. Peng Wang (831 papers)
  2. Songshuo Lu (2 papers)
  3. Yaohua Tang (9 papers)
  4. Sijie Yan (11 papers)
  5. Yuanjun Xiong (52 papers)
  6. Wei Xia (147 papers)
Citations (5)