A Dual-Purpose Deep Learning Model for Auscultated Lung and Tracheal Sound Analysis Based on Mixed Set Training (2107.04229v2)

Published 9 Jul 2021 in cs.SD, cs.LG, and eess.AS

Abstract: Many deep learning-based computerized respiratory sound analysis methods have been developed previously. However, these studies focus on either lung sounds or tracheal sounds alone; the effectiveness of applying a lung sound analysis algorithm to tracheal sounds, and vice versa, has never been investigated, and whether training a respiratory sound analysis model on lung and tracheal sounds together is beneficial remains unknown. In this study, we first constructed a tracheal sound database, HF_Tracheal_V1, containing 10,448 15-s tracheal sound recordings, 21,741 inhalation labels, 15,858 exhalation labels, and 6,414 continuous adventitious sound (CAS) labels. HF_Tracheal_V1 and our previously built lung sound database, HF_Lung_V2, were either combined (mixed set), used one after the other (domain adaptation), or used alone to train convolutional neural network-bidirectional gated recurrent unit (CNN-BiGRU) models for inhalation, exhalation, and CAS detection in lung and tracheal sounds. The results revealed that models trained on lung sounds alone performed poorly on tracheal sounds, and vice versa. However, compared with the positive controls (models trained and evaluated within the same domain), mixed set training or domain adaptation improved performance for 1) inhalation and exhalation detection in lung sounds and 2) inhalation, exhalation, and CAS detection in tracheal sounds. In particular, the model trained on the mixed set had the flexibility to serve both purposes, lung and tracheal sound analysis, at the same time.
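
The abstract describes CNN-BiGRU models that emit frame-level inhalation, exhalation, and CAS predictions from 15-s recordings. The following is a minimal PyTorch sketch of that kind of architecture, not the authors' released implementation; the spectrogram input shape, layer sizes, and three-event sigmoid output head are all illustrative assumptions.

```python
# A minimal sketch, NOT the paper's released code. Input shape, layer sizes,
# and the per-frame sigmoid head are assumptions chosen to illustrate the
# CNN-BiGRU detector described in the abstract.
import torch
import torch.nn as nn


class CNNBiGRUDetector(nn.Module):
    """Frame-level detector for inhalation, exhalation, and CAS events.

    Input:  (batch, 1, n_mels, n_frames) spectrogram of a 15-s recording.
    Output: (batch, n_frames, n_events) per-frame event probabilities.
    """

    def __init__(self, n_mels: int = 64, n_events: int = 3, hidden: int = 128):
        super().__init__()
        # CNN front end: pool only along the frequency axis so the time
        # resolution needed for frame-level labels is preserved.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 1)),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 1)),
        )
        # Bidirectional GRU over the time axis.
        self.gru = nn.GRU(
            input_size=64 * (n_mels // 4),
            hidden_size=hidden,
            batch_first=True,
            bidirectional=True,
        )
        self.head = nn.Linear(2 * hidden, n_events)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(spec)                                  # (B, C, F', T)
        b, c, f, t = feats.shape
        feats = feats.permute(0, 3, 1, 2).reshape(b, t, c * f)  # (B, T, C*F')
        seq, _ = self.gru(feats)                                # (B, T, 2*hidden)
        return torch.sigmoid(self.head(seq))                    # (B, T, n_events)


model = CNNBiGRUDetector()
probs = model(torch.randn(2, 1, 64, 300))  # two recordings -> (2, 300, 3)
```

Under this sketch, the mixed-set strategy amounts to training one such model on the concatenation of HF_Lung_V2 and HF_Tracheal_V1, while domain adaptation would pretrain on one database and then fine-tune on the other.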

