
MinMo: A Multimodal Large Language Model for Seamless Voice Interaction (2501.06282v1)

Published 10 Jan 2025 in cs.CL, cs.AI, cs.HC, cs.SD, and eess.AS

Abstract: Recent advancements in LLMs and multimodal speech-text models have laid the groundwork for seamless voice interactions, enabling real-time, natural, and human-like conversations. Previous models for voice interactions are categorized as native and aligned. Native models integrate speech and text processing in one framework but struggle with issues like differing sequence lengths and insufficient pre-training. Aligned models maintain text LLM capabilities but are often limited by small datasets and a narrow focus on speech tasks. In this work, we introduce MinMo, a Multimodal LLM with approximately 8B parameters for seamless voice interaction. We address the main limitations of prior aligned multimodal models. We train MinMo through multiple stages of speech-to-text alignment, text-to-speech alignment, speech-to-speech alignment, and duplex interaction alignment, on 1.4 million hours of diverse speech data and a broad range of speech tasks. After the multi-stage training, MinMo achieves state-of-the-art performance across various benchmarks for voice comprehension and generation while maintaining the capabilities of text LLMs, and also facilitates full-duplex conversation, that is, simultaneous two-way communication between the user and the system. Moreover, we propose a novel and simple voice decoder that outperforms prior models in voice generation. The enhanced instruction-following capabilities of MinMo supports controlling speech generation based on user instructions, with various nuances including emotions, dialects, and speaking rates, and mimicking specific voices. For MinMo, the speech-to-text latency is approximately 100ms, full-duplex latency is approximately 600ms in theory and 800ms in practice. The MinMo project web page is https://funaudioLLM.github.io/minmo, and the code and models will be released soon.

MinMo: A Multimodal LLM for Seamless Voice Interaction

The paper "MinMo: A Multimodal LLM for Seamless Voice Interaction" by the FunAudioLLM Team from Alibaba Group introduces MinMo, a multimodal LLM designed to optimize voice interactions through seamless integration of speech and text modalities. This essay provides an expert overview of the key methodologies, results, and implications of this research.

Overview of MinMo

MinMo is a multimodal LLM with approximately 8 billion parameters, developed to address the limitations of earlier speech-text models for voice interaction. Existing models generally fall into two categories: native multimodal models, which integrate speech and text in a single framework but struggle with the sequence-length mismatch between speech and text and with insufficient pre-training, and aligned multimodal models, which preserve text-LLM capabilities but have typically been trained on small speech datasets covering a narrow range of tasks.

Methodology

MinMo adopts a multi-stage training approach to align the speech and text modalities (a code sketch of the resulting pipeline follows this list). The stages are:

  1. Speech-to-Text Alignment: Utilizing large-scale speech data to align the audio input latent space with a pre-trained text LLM.
  2. Text-to-Speech Alignment: Developing an Output Projector and Voice Token LM to bridge the semantic representations of text into the audio output latent space.
  3. Speech-to-Speech Alignment: Enhancing audio-to-audio interactions using substantial paired audio data, enabling nuanced control over speech style and delivery based on user instructions.
  4. Duplex Interaction Alignment: Implementing a full-duplex prediction module to facilitate real-time two-way communication, allowing the system to manage simultaneous speaking and listening tasks effectively.
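
To make the staged pipeline above concrete, the following is a minimal PyTorch-style sketch of how such components might be wired together. The module names, dimensions, and interfaces are illustrative assumptions and do not reflect MinMo's released implementation, which builds on an ~8B-parameter pre-trained text LLM.

```python
# Minimal PyTorch-style sketch of the alignment pipeline described above.
# All module names, sizes, and interfaces are illustrative assumptions.
import torch
import torch.nn as nn

class VoiceInteractionSketch(nn.Module):
    def __init__(self, n_mels=80, d_speech=256, d_llm=512, n_voice_tokens=1024):
        super().__init__()
        self.voice_encoder = nn.GRU(n_mels, d_speech, batch_first=True)   # stand-in audio encoder
        self.input_projector = nn.Linear(d_speech, d_llm)                 # stage 1: speech-to-text alignment
        self.llm = nn.TransformerEncoder(                                 # stand-in for the pre-trained text LLM
            nn.TransformerEncoderLayer(d_llm, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.output_projector = nn.Linear(d_llm, d_llm)                   # stage 2: text-to-speech alignment
        self.voice_token_lm = nn.Linear(d_llm, n_voice_tokens)            # predicts discrete voice tokens
        self.duplex_head = nn.Linear(d_llm, 3)                            # stage 4: listen / speak / stop decision

    def forward(self, mel):                         # mel: (batch, frames, n_mels)
        speech_feats, _ = self.voice_encoder(mel)   # (batch, frames, d_speech)
        hidden = self.llm(self.input_projector(speech_feats))
        voice_logits = self.voice_token_lm(self.output_projector(hidden))
        duplex_logits = self.duplex_head(hidden[:, -1])   # turn-taking decision from the latest state
        return voice_logits, duplex_logits

model = VoiceInteractionSketch()
voice_logits, duplex_logits = model(torch.randn(1, 200, 80))
print(voice_logits.shape, duplex_logits.shape)   # (1, 200, 1024) and (1, 3)
```

In a full system, the predicted voice tokens would be converted to waveform by a separate voice decoder (the paper proposes a novel, simple one); that component is omitted from this sketch.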

Results

MinMo achieves state-of-the-art performance across multiple benchmarks, demonstrating superior capabilities in speech comprehension, generation, and full-duplex interaction. Notable results include:

  • Speech Recognition and Translation: MinMo surpasses existing models such as Whisper and Qwen2-Audio on both ASR and multilingual speech-translation tasks, and it does so without requiring a language-identification prompt.
  • Speech Emotion and Audio Event Recognition: MinMo improves upon previous models in understanding complex speech attributes such as emotion and audio events, showing particular prowess in cross-lingual emotion recognition.
  • Instruction-Following Voice Generation: The model excels in generating speech that conforms to diverse user instructions regarding style and emotion, achieving high accuracy in instruction adherence.
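
As a purely illustrative example of the instruction-following control described in the last bullet, a request to such a system might combine a spoken query with a natural-language style instruction. The request fields and the generate() call below are hypothetical and are not a published MinMo API.

```python
# Hypothetical usage sketch: controlling voice generation with a style instruction.
# The request fields and the generate() call are invented for illustration only.
style_instruction = (
    "Reply in a cheerful tone, in a Sichuan dialect, "
    "at a slightly faster-than-normal speaking rate."
)

request = {
    "system_prompt": style_instruction,        # controls emotion, dialect, speaking rate
    "audio_input": "user_question.wav",        # user's spoken query
    "reference_voice": "target_speaker.wav",   # optional voice to mimic
}

# response_audio = minmo.generate(**request)   # placeholder call, not a real API
```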

Implications and Future Developments

MinMo represents a significant advance in voice interaction systems, addressing the small training datasets and narrow task coverage that limited prior aligned models, while sidestepping the sequence-length mismatch that hampers native models and preserving the capabilities of text-based LLMs. Its ability to manage full-duplex interactions seamlessly underlines its potential for real-time applications, paving the way for more sophisticated voice-driven interfaces in AI systems.

Future developments could build upon MinMo's architecture to further reduce latency, expand the range of supported languages and dialects, and enhance instruction-following capabilities through more extensive training and data scaling. MinMo sets a precedent for integrating multimodal capabilities into LLMs, advancing the boundaries of voice interaction systems in AI.

Authors (36)
  1. Qian Chen (264 papers)
  2. Yafeng Chen (26 papers)
  3. Yanni Chen (11 papers)
  4. Mengzhe Chen (6 papers)
  5. Yingda Chen (13 papers)
  6. Chong Deng (22 papers)
  7. Zhihao Du (30 papers)
  8. Ruize Gao (11 papers)
  9. Changfeng Gao (7 papers)
  10. Zhifu Gao (28 papers)
  11. Yabin Li (4 papers)
  12. Xiang Lv (15 papers)
  13. Jiaqing Liu (20 papers)
  14. Haoneng Luo (7 papers)
  15. Bin Ma (78 papers)
  16. Chongjia Ni (18 papers)
  17. Xian Shi (50 papers)
  18. Jialong Tang (17 papers)
  19. Hui Wang (371 papers)
  20. Hao Wang (1119 papers)