WhisperFlow: speech foundation models in real time (2412.11272v2)
Abstract: Speech foundation models, such as OpenAI's Whisper, have become the state of the art in speech understanding thanks to their strong accuracy and generalizability. Yet their applications are mostly limited to processing pre-recorded speech, whereas support for streaming speech, in particular efficient streaming, remains rudimentary. Behind this inefficiency lie several fundamental causes: (1) speech foundation models are trained to process long, fixed-length voice inputs (often 30 seconds); (2) encoding each voice input requires pushing as many as 1,500 tokens through tens of transformer layers; (3) decoding each output entails an irregular, complex beam search. As a result, streaming speech processing on resource-constrained client devices is more expensive than other AI tasks, e.g., text generation. To this end, we present a novel framework, WhisperFlow, which embodies both model and system optimizations: (1) a hush word, a short, learnable audio segment; appended to a voice input, it gracefully stops the speech model from processing further input without hallucinating; (2) beam pruning, which aligns streaming audio buffers over time and reuses results from earlier decoding rounds, thereby significantly accelerating decoding; and (3) CPU/GPU pipelining, which dynamically maps compute resources to the encoding and decoding stages and tunes the resource ratio to the encoding/decoding speeds, which vary across voice inputs, models, and hardware. We test WhisperFlow on commodity ARM platforms with 4-12 CPU cores and 10-30 GPU cores. It reduces per-word latency by 1.6x-4.7x, to as low as 0.5 seconds, with negligible accuracy degradation. On an entry-level MacBook Air, WhisperFlow keeps per-word latency around 1 second while the whole device draws only 7 watts in total.
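As a rough illustration of the hush-word idea described above, the minimal sketch below shows how a short audio segment could be appended to a partial streaming buffer before padding it to the fixed 30-second, 16 kHz window that Whisper-style models expect. The `build_streaming_input` helper and the zero-valued `hush_word` placeholder are hypothetical, standing in for the paper's learned segment; only the window length and sample rate come from standard Whisper conventions.

```python
import numpy as np

SAMPLE_RATE = 16_000           # Whisper-style models expect 16 kHz audio
CHUNK_SECONDS = 30             # fixed-length input window (30 s)
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_SECONDS

# Hypothetical "hush word": 0.5 s of placeholder samples standing in for the
# learned audio segment described in the paper.
hush_word = np.zeros(SAMPLE_RATE // 2, dtype=np.float32)

def build_streaming_input(audio_buffer: np.ndarray) -> np.ndarray:
    """Append the hush word to the partial streaming buffer, then pad the
    result to the fixed 30 s window the speech foundation model expects.

    audio_buffer: float32 samples captured so far in the current utterance.
    """
    # Keep only as much recent audio as fits alongside the hush word.
    clipped = audio_buffer[-(CHUNK_SAMPLES - hush_word.size):]
    with_hush = np.concatenate([clipped, hush_word])
    # Zero-pad up to the fixed window length.
    padded = np.zeros(CHUNK_SAMPLES, dtype=np.float32)
    padded[: with_hush.size] = with_hush
    return padded

if __name__ == "__main__":
    # Simulate 3.2 s of captured speech and build one model-ready window.
    partial = np.random.randn(int(3.2 * SAMPLE_RATE)).astype(np.float32)
    window = build_streaming_input(partial)
    print(window.shape)  # (480000,) == 30 s at 16 kHz
```

In the actual system, the hush word is learned rather than silence, so the model stops decoding at the end of the captured audio instead of hallucinating text for the padded region; this sketch only shows where such a segment would sit in the streaming input.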