Brouhaha: multi-task training for voice activity detection, speech-to-noise ratio, and C50 room acoustics estimation (2210.13248v3)

Published 24 Oct 2022 in eess.AS and cs.SD

Abstract: Most automatic speech processing systems register degraded performance when applied to noisy or reverberant speech. But how can one tell whether speech is noisy or reverberant? We propose Brouhaha, a neural network jointly trained to extract speech/non-speech segments, speech-to-noise ratios, and C50 room acoustics from single-channel recordings. Brouhaha is trained using a data-driven approach in which noisy and reverberant audio segments are synthesized. We first evaluate its performance and demonstrate that the proposed multi-task regime is beneficial. We then present two scenarios illustrating how Brouhaha can be used on naturally noisy and reverberant data: 1) to investigate the errors made by a speaker diarization model (pyannote.audio); and 2) to assess the reliability of an automatic speech recognition model (Whisper from OpenAI). Both our pipeline and a pretrained model are open source and shared with the speech community.

Citations (19)

Summary

  • The paper introduces a multi-task neural network that jointly performs voice activity detection, SNR estimation, and C50 room acoustics estimation.
  • It employs a SincNet and LSTM architecture trained on over 1,250 hours of synthetic noisy and reverberant data to enhance performance.
  • The results show a VAD F-score of 93.7% and an SNR MAE of 2.3 dB, providing actionable insights for improving real-world speech processing.

Multi-task Learning for Improved Speech Processing in Noisy Environments

The paper presents "Brouhaha," a multi-task neural network that performs voice activity detection (VAD), speech-to-noise ratio (SNR) estimation, and C50 room acoustics estimation from single-channel audio recordings. The innovation lies in its joint training regime, which integrates these tasks and is posited to enhance robustness and effectiveness in audio environments characterized by noise and reverberation. This work is of considerable interest to researchers exploring techniques to improve the performance of automatic speech recognition (ASR) and speaker diarization systems under acoustically challenging conditions.

Methodology

The study addresses the common problem of signal degradation in speech processing systems exposed to noisy or reverberant audio. The Brouhaha model is trained on a synthetic dataset, generated by adding controlled noise and reverberation to clean speech segments. The multi-task training approach optimizes a loss function that combines the VAD, SNR, and C50 estimation tasks, the idea being that sharing model parameters across tasks improves outcomes (with a significant increase in the estimation accuracy of acoustic parameters like C50). A minimal sketch of such a combined loss is given after the architecture description below.
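To make the training objective concrete, the following is a minimal PyTorch sketch of how such a combined loss could be written. The choice of binary cross-entropy for VAD, mean squared error for the two regression targets, and the equal default weights are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def multitask_loss(pred_vad, pred_snr, pred_c50,
                   true_vad, true_snr, true_c50,
                   w_vad=1.0, w_snr=1.0, w_c50=1.0):
    """Weighted sum of a VAD classification loss and two regression losses.

    Illustrative only: the paper combines the three objectives, but the exact
    loss functions and weights may differ from this sketch.
    """
    # Frame-level voice activity detection: binary cross-entropy on probabilities
    loss_vad = F.binary_cross_entropy(pred_vad, true_vad)
    # SNR and C50 regression: mean squared error (MAE would also be plausible)
    loss_snr = F.mse_loss(pred_snr, true_snr)
    loss_c50 = F.mse_loss(pred_c50, true_c50)
    return w_vad * loss_vad + w_snr * loss_snr + w_c50 * loss_c50
```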

The multi-task regime applied by Brouhaha uses a SincNet front-end for feature extraction followed by LSTM layers to model temporal dependencies in speech. The architecture is optimized through extensive trials over various hyperparameters. Training and testing are conducted on a dataset exceeding 1,250 hours, combining speech, noise, and room impulse responses (RIRs).
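A rough sketch of this kind of architecture is shown below. The layer sizes are illustrative guesses, and a plain 1-D convolution stands in for the learnable SincNet filterbank, so this should not be read as the actual Brouhaha implementation.

```python
import torch
import torch.nn as nn

class BrouhahaLikeModel(nn.Module):
    """Sketch of a SincNet + LSTM multi-task model with three output heads."""

    def __init__(self, n_features=60, hidden=128, num_layers=2):
        super().__init__()
        # Placeholder for the learnable SincNet filterbank front-end
        self.frontend = nn.Conv1d(1, n_features, kernel_size=251, stride=10)
        self.lstm = nn.LSTM(n_features, hidden, num_layers=num_layers,
                            batch_first=True, bidirectional=True)
        # One head per task: VAD (frame-wise probability), SNR and C50 (regression)
        self.vad_head = nn.Linear(2 * hidden, 1)
        self.snr_head = nn.Linear(2 * hidden, 1)
        self.c50_head = nn.Linear(2 * hidden, 1)

    def forward(self, waveform):            # waveform: (batch, 1, samples)
        feats = self.frontend(waveform)     # (batch, n_features, frames)
        feats = feats.transpose(1, 2)       # (batch, frames, n_features)
        out, _ = self.lstm(feats)           # (batch, frames, 2 * hidden)
        vad = torch.sigmoid(self.vad_head(out)).squeeze(-1)
        snr = self.snr_head(out).squeeze(-1)
        c50 = self.c50_head(out).squeeze(-1)
        return vad, snr, c50
```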

Results and Evaluation

Evaluation metrics indicate strong performance on synthetic data, with the proposed system achieving a VAD F-score of 93.7%, outperforming state-of-the-art systems like pyannote.audio on this task. Notably, Brouhaha shows a mean absolute error (MAE) of 2.3 dB in SNR estimation for unseen synthetic audio, marking a substantial gain over heuristic approaches.
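For reference, these two metrics can be computed at the frame level roughly as follows. The exact evaluation protocol (frame size, VAD threshold, whether the SNR error is scored on speech frames only) is an assumption in this sketch.

```python
import numpy as np
from sklearn.metrics import f1_score

def evaluate(pred_vad, true_vad, pred_snr, true_snr, threshold=0.5):
    """Frame-level VAD F-score and SNR mean absolute error (illustrative)."""
    # Binarize VAD probabilities at an assumed threshold, then compute F1
    vad_f = f1_score(true_vad.astype(int), (pred_vad >= threshold).astype(int))
    # Mean absolute error of the SNR estimate in dB
    snr_mae = np.mean(np.abs(pred_snr - true_snr))
    return vad_f, snr_mae
```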

For real-world data, Brouhaha trails established systems on VAD (dropping to 77.2% on naturalistic child-centered recordings), but it still offers significant insights for error analysis in speaker diarization and ASR systems. The paper demonstrates the potential of Brouhaha for diagnosing and improving system performance under varying SNR and reverberation conditions. Additionally, the model's C50 estimation is validated on the BUT Speech@FIT Reverb dataset, showing strong correlations between estimated and measured values across varying room configurations.
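As an illustration of this diagnostic use, a segment-filtering sketch is given below. The dictionary layout and the SNR/C50 thresholds are hypothetical: the paper correlates estimated acoustic conditions with error rates rather than prescribing fixed cut-offs, and the estimates themselves would come from running the released Brouhaha model over each segment.

```python
def flag_unreliable_segments(segments, snr_floor=5.0, c50_floor=10.0):
    """Flag segments whose estimated mean SNR (dB) or C50 (dB) is low.

    `segments` is a list of dicts with precomputed "snr" and "c50" fields
    (e.g. Brouhaha frame outputs averaged over the segment); the thresholds
    are illustrative choices, not values from the paper.
    """
    return [seg for seg in segments
            if seg["snr"] < snr_floor or seg["c50"] < c50_floor]

# Example with two hypothetical ASR segments and their estimated conditions
segments = [
    {"text": "hello there", "snr": 18.2, "c50": 22.5},
    {"text": "noisy part",  "snr": 2.1,  "c50": 6.3},
]
print(flag_unreliable_segments(segments))  # flags the second segment
```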

Implications and Future Directions

The practical implications of this research are particularly relevant for real-time applications where audio clarity is pivotal, such as telecommunication, multimedia broadcasting, and human-computer interaction. Moreover, it opens avenues for further exploration in downstream tasks like SNR-based speech enhancement and room-acoustics-informed microphone selection.

The paper suggests future work could focus on expanding the range of acoustic parameters estimated by the model and improving performance with data from spontaneous speech utterances. Overall, Brouhaha signifies an important step in leveraging multi-task learning to enhance speech technology scalability in real-world environments characterized by acoustic challenges.
