
Cross-task pre-training for on-device acoustic scene classification (1910.09935v2)

Published 22 Oct 2019 in cs.SD, cs.LG, and eess.AS

Abstract: Acoustic scene classification (ASC) and acoustic event detection (AED) are distinct but related tasks: acoustic events can provide useful cues for recognizing acoustic scenes. However, most datasets are provided with either acoustic event labels or scene labels, but not both. To exploit acoustic event information for ASC, we present a cross-task pre-training mechanism that transfers knowledge from a pre-trained AED model to ASC tasks. Furthermore, most existing models are designed and implemented on platforms with rich computing resources, which limits on-device applications. To address this, we apply model distillation to compress our cross-task model and enable on-device acoustic scene classification. In this paper, the cross-task models and their student model are trained and evaluated on two datasets: the TAU Urban Acoustic Scenes 2019 dataset and the TUT Acoustic Scenes 2017 dataset. Results show that cross-task pre-training significantly improves ASC performance: compared with the official baselines, our best model achieves a relative improvement of 9.5% on TAU Urban Acoustic Scenes 2019 and 10% on TUT Acoustic Scenes 2017. The distilled student model also performs much better than the same model trained without a teacher.
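The abstract mentions compressing the cross-task model via model distillation. The paper does not spell out its exact objective, but a minimal sketch of a standard knowledge-distillation loss (Hinton-style: hard-label cross-entropy blended with a temperature-softened teacher/student KL term) looks like this; the function names, temperature `T`, and weight `alpha` are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hypothetical distillation objective (not the paper's exact loss):
    alpha * cross-entropy with hard scene labels
    + (1 - alpha) * T^2 * KL(teacher_soft || student_soft)."""
    p_s = softmax(student_logits, T)
    p_t = softmax(teacher_logits, T)
    # KL divergence between softened distributions; T^2 keeps gradient
    # magnitudes comparable to the hard-label term.
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    # Standard cross-entropy against the ground-truth scene labels.
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return alpha * hard.mean() + (1 - alpha) * (T ** 2) * kl.mean()
```

In this setup the compact student mimics the soft scene posteriors of the larger cross-task teacher while still fitting the hard labels, which is consistent with the abstract's observation that the student outperforms the same architecture trained without a teacher.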
