Cochleagram-based Noise Adapted Speaker Identification System for Distorted Speech (2508.21347v1)
Abstract: Speaker identification refers to the process of identifying a person by voice from a collection of known speakers. Environmental noise, reverberation, and distortion make automatic speaker identification challenging, since the extracted features are degraded and the performance of the speaker identification (SID) system suffers. This paper proposes a noise adapted SID system that is robust under noisy, mismatched, reverberated, and distorted conditions. The method uses an auditory feature called the cochleagram to capture speaker characteristics and identify the speaker. A $128$-channel gammatone filterbank spanning $50$ to $8000$ Hz was used to generate 2-D cochleagrams. Wideband and narrowband noises were mixed with clean speech to obtain noisy cochleagrams at various signal-to-noise ratios (SNRs). Clean cochleagrams together with noisy cochleagrams at only $-5$ dB SNR were then fed into a convolutional neural network (CNN) to build a speaker model for SID, referred to as the noise adapted speaker model (NASM). The NASM was trained with one noise type and evaluated on clean speech and various other noise types. The robustness of the proposed system was further tested with reverberated and distorted test data. The proposed system showed a measurable accuracy improvement over an existing neurogram-based SID system.
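The abstract fixes the front end (a $128$-channel gammatone filterbank from $50$ to $8000$ Hz) and the SNR-mixing step, but does not specify the framing, hop, or compression used to form the 2-D cochleagram. The sketch below is therefore illustrative, not the paper's implementation: it uses a standard 4th-order gammatone impulse response with ERB-spaced center frequencies (Glasberg and Moore), and the window/hop lengths and helper names (`cochleagram`, `mix_at_snr`) are assumptions.

```python
import numpy as np

def erb(f):
    # Equivalent rectangular bandwidth in Hz (Glasberg & Moore, 1990)
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def erb_space(f_lo, f_hi, n):
    # n center frequencies equally spaced on the ERB-rate scale
    cam = lambda f: 21.4 * np.log10(4.37 * f / 1000.0 + 1.0)
    cams = np.linspace(cam(f_lo), cam(f_hi), n)
    return (10.0 ** (cams / 21.4) - 1.0) * 1000.0 / 4.37

def gammatone_ir(fc, fs, dur=0.064, order=4):
    # 4th-order gammatone impulse response at center frequency fc
    t = np.arange(int(dur * fs)) / fs
    b = 1.019 * erb(fc)
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))

def cochleagram(x, fs, n_channels=128, f_lo=50.0, f_hi=8000.0,
                win=0.020, hop=0.010):
    # Pass the signal through the gammatone bank, then take framewise
    # energy per channel to form a 2-D time-frequency image.
    # win/hop (20 ms / 10 ms) are assumed values, not from the paper.
    fcs = erb_space(f_lo, f_hi, n_channels)
    wlen, hlen = int(win * fs), int(hop * fs)
    n_frames = 1 + (len(x) - wlen) // hlen
    cg = np.zeros((n_channels, n_frames))
    for i, fc in enumerate(fcs):
        y = np.convolve(x, gammatone_ir(fc, fs), mode="same")
        for j in range(n_frames):
            seg = y[j * hlen : j * hlen + wlen]
            cg[i, j] = np.sum(seg ** 2)  # frame energy
    return np.log(cg + 1e-10)  # log compression is a common choice

def mix_at_snr(clean, noise, snr_db):
    # Scale the noise so the mixture has the requested SNR in dB
    noise = noise[: len(clean)]
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_clean / (p_noise * 10.0 ** (snr_db / 10.0)))
    return clean + scale * noise
```

With a $16$ kHz waveform `x`, `cochleagram(x, 16000)` yields a $128 \times T$ array that can be fed to a CNN as a single-channel image; `mix_at_snr(x, n, -5.0)` produces the $-5$ dB training mixtures described in the abstract.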