Acoustic-to-articulatory inversion for dysarthric speech: Are pre-trained self-supervised representations favorable? (2309.01108v4)
Abstract: Acoustic-to-articulatory inversion (AAI) involves mapping from the acoustic to the articulatory space. Signal-processing features, such as MFCCs, have been widely used for the AAI task. For subjects with dysarthric speech, AAI is challenging because of imprecise and indistinct pronunciation. In this work, we perform AAI for dysarthric speech using representations from pre-trained self-supervised learning (SSL) models. We demonstrate the impact of different pre-trained features on this challenging AAI task under low-resource conditions. In addition, we condition the extracted SSL features on x-vectors to train a BLSTM network. In the seen case, we experiment with three AAI training schemes (subject-specific, pooled, and fine-tuned). The results, consistent across training schemes, reveal that DeCoAR, in the fine-tuned scheme, achieves relative improvements in the Pearson correlation coefficient (CC) of ~1.81% and ~4.56% for healthy controls and patients, respectively, over MFCCs. We observe similar average trends for different SSL features in the unseen case. Overall, SSL networks like wav2vec, APC, and DeCoAR, which are trained with feature-reconstruction or future-timestep-prediction objectives, perform well in predicting dysarthric articulatory trajectories.
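The model the abstract describes (a BLSTM that maps frame-level SSL features, conditioned on an utterance-level x-vector, to articulatory trajectories, evaluated by Pearson CC) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the feature dimensions, layer sizes, number of articulatory channels, and the concatenation-based conditioning (tiling the x-vector across frames) are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class XVectorConditionedBLSTM(nn.Module):
    """Sketch of an AAI model: frame-level SSL features are concatenated
    with a tiled utterance-level x-vector and mapped to articulatory
    trajectories by a BLSTM. Dimensions are illustrative only."""

    def __init__(self, ssl_dim=768, xvec_dim=512, hidden=256, n_articulators=12):
        super().__init__()
        self.blstm = nn.LSTM(
            input_size=ssl_dim + xvec_dim,  # concatenation-based conditioning (assumed)
            hidden_size=hidden,
            num_layers=2,
            batch_first=True,
            bidirectional=True,
        )
        # 2 * hidden: forward and backward directions are concatenated
        self.proj = nn.Linear(2 * hidden, n_articulators)

    def forward(self, ssl_feats, xvec):
        # ssl_feats: (batch, frames, ssl_dim); xvec: (batch, xvec_dim)
        T = ssl_feats.size(1)
        xvec_rep = xvec.unsqueeze(1).expand(-1, T, -1)  # tile x-vector over frames
        h, _ = self.blstm(torch.cat([ssl_feats, xvec_rep], dim=-1))
        return self.proj(h)  # (batch, frames, n_articulators)

def pearson_cc(pred, true):
    """Pearson correlation per articulator, computed along the frame axis."""
    pred = pred - pred.mean(dim=1, keepdim=True)
    true = true - true.mean(dim=1, keepdim=True)
    return (pred * true).sum(dim=1) / (pred.norm(dim=1) * true.norm(dim=1))

# Usage: predict trajectories for a 3-second utterance at ~50 frames/s.
model = XVectorConditionedBLSTM()
feats = torch.randn(1, 150, 768)       # e.g. DeCoAR/wav2vec frame features
xvec = torch.randn(1, 512)             # embedding from an x-vector extractor
traj = model(feats, xvec)              # -> (1, 150, 12) articulator positions
cc = pearson_cc(traj, torch.randn(1, 150, 12))  # -> (1, 12) CC per articulator
```

Concatenating the tiled x-vector at the network input is one common way to inject speaker identity; the paper's exact conditioning mechanism and hyperparameters may differ.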