COSMIC: Data Efficient Instruction-tuning For Speech In-Context Learning (2311.02248v2)

Published 3 Nov 2023 in cs.CL, cs.AI, and eess.AS

Abstract: We present a cost-effective method to integrate speech into an LLM, resulting in COSMIC, a multi-modal Contextual Speech Model with Instruction-following/in-context-learning Capabilities. Using GPT-3.5, we generate Speech Comprehension Test Question-Answer (SQA) pairs from speech transcriptions for supervised instruction tuning. With under 30 million trainable parameters and only 450 hours of English speech data, COSMIC demonstrates emerging capabilities in instruction-following and in-context learning. Equipped with such capabilities, COSMIC achieves a maximum 33.18 BLEU score in 0-shot EN-to-X speech-to-text translation (S2TT) and a significant boost in the 1-shot setting. Additionally, there is an average 25.8% relative Word Error Rate (WER) reduction for 1-shot cross-domain adaptation. COSMIC exhibits a significant automatic speech recognition (ASR) accuracy gain in contextual biasing tasks due to its instruction-following capability.
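The abstract's headline ASR result is an average 25.8% *relative* WER reduction, i.e. the drop in WER expressed as a fraction of the baseline WER rather than an absolute difference. A minimal sketch of both computations (standard edit-distance WER and relative reduction; the numbers in the usage lines are illustrative, not from the paper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i ref words and first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution,      # substitute (or match)
                           dp[i - 1][j] + 1,  # delete a reference word
                           dp[i][j - 1] + 1)  # insert a hypothesis word
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

def relative_wer_reduction(baseline_wer: float, adapted_wer: float) -> float:
    """Relative reduction: (baseline - adapted) / baseline."""
    return (baseline_wer - adapted_wer) / baseline_wer

# Illustrative only: one substitution over six reference words -> WER = 1/6
base = wer("the cat sat on the mat", "the cat sat on a mat")
# A baseline WER of 0.200 dropping to 0.1484 matches the paper's 25.8% figure
print(relative_wer_reduction(0.200, 0.1484))
```

A 25.8% relative reduction thus means, for example, a baseline WER of 20.0% falling to roughly 14.8% after 1-shot cross-domain adaptation.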

Authors (7)
  1. Jing Pan
  2. Jian Wu
  3. Yashesh Gaur
  4. Sunit Sivasankaran
  5. Zhuo Chen
  6. Shujie Liu
  7. Jinyu Li
Citations (23)