PSELDNets: Pre-trained Neural Networks on a Large-scale Synthetic Dataset for Sound Event Localization and Detection (2411.06399v2)
Abstract: Sound event localization and detection (SELD) has seen substantial advancements through learning-based methods. These systems, typically trained from scratch on specific datasets, have shown considerable generalization capabilities. Recently, deep neural networks trained on large-scale datasets have achieved remarkable success in the sound event classification (SEC) field, prompting an open question of whether these advances can be extended to the development of SELD foundation models. In this paper, leveraging the power of pre-trained SEC models, we propose pre-trained SELD networks (PSELDNets) trained on a large-scale synthetic dataset. The synthetic dataset, generated by convolving sound events with simulated spatial room impulse responses (SRIRs), contains 1,167 hours of audio clips covering an ontology of 170 sound classes. These PSELDNets are applied to various SELD scenarios. When adapting PSELDNets to specific scenarios, particularly in low-resource data cases, we introduce a data-efficient fine-tuning method, AdapterBit. PSELDNets are evaluated on a synthetic test set generated with SRIRs collected from the TAU Spatial Room Impulse Response Database (TAU-SRIR DB) and achieve satisfactory performance. We also carry out experiments to validate the transferability of PSELDNets to three publicly available datasets and our own real-world recordings. The results demonstrate that PSELDNets surpass state-of-the-art systems across all publicly available datasets. Given the need for direction-of-arrival estimation, SELD generally relies on sufficient multi-channel audio clips. However, with AdapterBit incorporated, PSELDNets adapt more efficiently to various scenarios using minimal multi-channel or even just monophonic audio clips, outperforming traditional fine-tuning approaches.
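As a rough illustration of the data-generation step the abstract describes, the sketch below convolves a monophonic sound event with a multi-channel SRIR to produce a spatialized clip. This is a minimal sketch of the general technique, not the paper's actual pipeline; the array size, sampling rate, and scipy-based implementation are assumptions for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve


def spatialize(event: np.ndarray, srir: np.ndarray) -> np.ndarray:
    """Convolve a mono sound event with a multi-channel SRIR.

    event: (n_samples,) monophonic event waveform
    srir:  (n_channels, ir_length) spatial room impulse response,
           one impulse response per microphone of the array
    returns: (n_channels, n_samples + ir_length - 1) spatialized clip
    """
    return np.stack([fftconvolve(event, ir) for ir in srir])


# Illustrative usage with stand-in signals (not real dataset audio):
fs = 24000
event = np.random.randn(fs)          # 1 s placeholder for a sound event
srir = np.random.randn(4, fs // 4)   # placeholder for a simulated 4-ch SRIR
clip = spatialize(event, srir)
print(clip.shape)                    # (4, fs + fs // 4 - 1)
```

In a full pipeline, overlapping events rendered this way would be summed per channel and labeled with the source direction associated with each SRIR.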
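The abstract does not detail how AdapterBit works. One common reading of the name is a lightweight bottleneck adapter combined with bias-term (BitFit-style) tuning over a frozen backbone; the PyTorch sketch below illustrates that pattern purely as an assumption, with hypothetical module names and sizes, and should not be taken as the paper's actual method.

```python
import torch.nn as nn


class Adapter(nn.Module):
    """Residual bottleneck adapter inserted after a frozen block
    (hypothetical structure, assumed for illustration)."""

    def __init__(self, dim: int, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)  # project to small bottleneck
        self.up = nn.Linear(bottleneck, dim)    # project back to model dim
        self.act = nn.GELU()

    def forward(self, x):
        # Residual connection keeps the frozen backbone's behavior intact.
        return x + self.up(self.act(self.down(x)))


def mark_trainable(model: nn.Module) -> None:
    """Freeze the backbone; train only adapter weights and bias terms."""
    for name, param in model.named_parameters():
        param.requires_grad = ("adapter" in name) or name.endswith(".bias")
```

Under this reading, only a small fraction of parameters is updated during transfer, which is consistent with the abstract's claim of efficient adaptation from minimal multi-channel or monophonic data.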