Artificially Synthesising Data for Audio Classification and Segmentation to Improve Speech and Music Detection in Radio Broadcast (2102.09959v1)

Published 19 Feb 2021 in eess.AS, cs.LG, and cs.SD

Abstract: Segmenting audio into homogeneous sections such as music and speech helps us understand the content of audio. It is useful as a pre-processing step to index, store, and modify audio recordings, radio broadcasts and TV programmes. Deep learning models for segmentation are generally trained on copyrighted material, which cannot be shared. Annotating these datasets is time-consuming and expensive, and therefore significantly slows down research progress. In this study, we present a novel procedure that artificially synthesises data that resembles radio signals. We replicate the workflow of a radio DJ in mixing audio and investigate parameters like fade curves and audio ducking. We trained a Convolutional Recurrent Neural Network (CRNN) on this synthesised data and outperformed state-of-the-art algorithms for music-speech detection. This paper demonstrates the data synthesis procedure as a highly effective technique to generate large datasets to train deep neural networks for audio segmentation.
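The DJ-style mixing the abstract describes (overlaying speech on music with a ducking gain, then fading the music back up along a fade curve) can be sketched roughly as below. This is a minimal illustration with NumPy only; the sample rate, gain values, fade shapes, and function names are hypothetical choices for the example, not the paper's actual parameters.

```python
import numpy as np

SR = 16000  # assumed sample rate in Hz for this sketch


def fade_curve(n, shape="linear"):
    """Fade-in gain curve of length n samples.

    'linear' rises uniformly; 'exponential' rises slowly then quickly,
    two common fade shapes a DJ might use.
    """
    t = np.linspace(0.0, 1.0, n)
    return t ** 2 if shape == "exponential" else t


def mix_with_ducking(music, speech, duck_gain=0.5, fade_len=SR // 2):
    """Overlay speech on music, ducking the music under the speech.

    While speech is present, the music is attenuated by duck_gain;
    after the speech ends, the music fades back to full level over
    fade_len samples using the chosen fade curve.
    """
    out = music.astype(float).copy()
    n = min(len(speech), len(music))

    # Ducked region: attenuated music plus full-level speech.
    out[:n] = duck_gain * music[:n] + speech[:n]

    # Fade the music back up once the speech region ends.
    rest = len(music) - n
    if rest > 0:
        k = min(fade_len, rest)
        ramp = duck_gain + (1.0 - duck_gain) * fade_curve(k)
        out[n:n + k] = music[n:n + k] * ramp
    return out


# Toy signals standing in for real music and speech excerpts.
music = np.ones(SR)            # 1 s of constant-level "music"
speech = np.ones(SR // 4)      # 0.25 s of "speech" overlaid at the start
mix = mix_with_ducking(music, speech, duck_gain=0.5, fade_len=SR // 16)
```

Because the synthesis script controls exactly where speech and music start and stop, the frame-level segmentation labels needed to train the CRNN come for free, which is the key advantage over annotating copyrighted broadcast audio by hand.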

Authors (11)
  1. Satvik Venkatesh (6 papers)
  2. David Moffat (3 papers)
  3. Alexis Kirke (3 papers)
  4. Gözel Shakeri (4 papers)
  5. Stephen Brewster (1 paper)
  6. Jörg Fachner (1 paper)
  7. Helen Odell-Miller (1 paper)
  8. Alex Street (1 paper)
  9. Nicolas Farina (1 paper)
  10. Sube Banerjee (2 papers)
  11. Eduardo Reck Miranda (8 papers)
Citations (10)
