Instituto de Telecomunicações at IWSLT 2025: Aligning Small-Scale Speech and Language Models for Speech-to-Text Learning (2506.17019v1)
Published 20 Jun 2025 in cs.CL and cs.AI
Abstract: This paper presents the IT-IST submission to the IWSLT 2025 Shared Task on Instruction Following Speech Processing. We submit results for the Short Track, i.e., speech recognition, translation, and spoken question answering. Our model is a unified speech-to-text model that integrates a pre-trained continuous speech encoder and text decoder through a first phase of modality alignment and a second phase of instruction fine-tuning. Crucially, we focus on small-scale LLM backbones (< 2B parameters) and restrict training to high-quality, CC-BY-licensed data, using synthetic data generation to supplement existing resources.
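The abstract describes a two-stage recipe: first align continuous speech features with the text decoder's embedding space, then instruction-tune on the downstream tasks. The sketch below is a minimal, hypothetical illustration of that pattern, not the authors' implementation: the projector design, dimensions, and which components are frozen in each phase are assumptions made for clarity.

```python
import torch
import torch.nn as nn


class SpeechToTextModel(nn.Module):
    """Hypothetical sketch of an encoder-projector-decoder speech-to-text model.

    The speech encoder and text decoder stand in for any pre-trained continuous
    speech encoder and any small (<2B) LLM backbone; the projector bridges the
    two embedding spaces.
    """

    def __init__(self, speech_encoder: nn.Module, text_decoder: nn.Module,
                 enc_dim: int = 1024, dec_dim: int = 2048):
        super().__init__()
        self.speech_encoder = speech_encoder
        self.text_decoder = text_decoder
        # Assumed: a simple linear projection into the decoder's embedding space.
        self.projector = nn.Linear(enc_dim, dec_dim)

    def forward(self, speech_features: torch.Tensor,
                prompt_embeddings: torch.Tensor) -> torch.Tensor:
        # Encode speech, project it, and prepend it to the (text) prompt
        # embeddings before running the decoder over the combined sequence.
        speech_hidden = self.speech_encoder(speech_features)
        speech_hidden = self.projector(speech_hidden)
        decoder_inputs = torch.cat([speech_hidden, prompt_embeddings], dim=1)
        return self.text_decoder(decoder_inputs)


def configure_phase(model: SpeechToTextModel, phase: str) -> None:
    """Assumed freeze/unfreeze schedule for the two training phases.

    Phase "align": train only the projector so speech features land in the
    decoder's embedding space. Phase "instruct": additionally unfreeze the
    decoder for instruction fine-tuning on ASR / ST / SQA prompts.
    """
    for p in model.parameters():
        p.requires_grad = False
    for p in model.projector.parameters():
        p.requires_grad = True
    if phase == "instruct":
        for p in model.text_decoder.parameters():
            p.requires_grad = True


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    encoder = nn.Linear(80, 1024)    # placeholder "speech encoder"
    decoder = nn.Linear(2048, 2048)  # placeholder "text decoder"
    model = SpeechToTextModel(encoder, decoder)

    configure_phase(model, "align")
    out = model(torch.randn(2, 50, 80), torch.randn(2, 10, 2048))
    print(out.shape)  # torch.Size([2, 60, 2048])
```

The two-phase split keeps the expensive pre-trained components intact while the cheap projector learns the modality bridge, so that the second phase starts from representations the decoder can already consume.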