Contextualized Spoken Word Representations from Convolutional Autoencoders (2007.02880v2)
Abstract: Considerable work has been done on building text-based language models for different NLP tasks, but comparatively little research has addressed audio-based language models. This paper proposes a Convolutional Autoencoder based neural architecture to model syntactically and semantically adequate contextualized representations of variable-length spoken words. Such representations can not only lead to advances in audio-based NLP tasks but can also curtail the loss of information such as tone, expression, and accent that occurs when speech is converted to text to perform these tasks. The performance of the proposed model is validated by (1) examining the generated vector space, and (2) evaluating it on three benchmark datasets for measuring word similarities against existing, widely used text-based language models trained on the transcriptions. The proposed model demonstrated its robustness when compared with the two text-based models.
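To make the idea concrete, below is a minimal sketch of a 1-D convolutional autoencoder that maps a variable-length spoken-word spectrogram to a fixed-size embedding. This is not the paper's reported architecture: the log-mel input features, channel counts, kernel widths, embedding dimension, and the adaptive-pooling strategy for handling variable word lengths are all illustrative assumptions.

```python
# Sketch only: a 1-D convolutional autoencoder for spoken-word embeddings.
# All hyperparameters (channels, embedding size, kernels) are illustrative
# guesses, not the configuration used in the paper.
import torch
import torch.nn as nn

class ConvWordAutoencoder(nn.Module):
    def __init__(self, n_mels: int = 40, embed_dim: int = 128):
        super().__init__()
        # Encoder: convolve over time, then pool the variable time axis to
        # length 1 so words of different durations yield same-size embeddings.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_mels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, embed_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse variable time axis
        )
        # Decoder: broadcast the embedding back over the original number of
        # frames and reconstruct the spectrogram (length-preserving convs).
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(embed_dim, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.ConvTranspose1d(64, n_mels, kernel_size=5, padding=2),
        )

    def forward(self, x: torch.Tensor, out_len: int):
        # x: (batch, n_mels, time) log-mel spectrogram of one spoken word
        z = self.encoder(x)                           # (batch, embed_dim, 1)
        recon = self.decoder(z.expand(-1, -1, out_len))
        return z.squeeze(-1), recon                   # embedding, reconstruction

# Usage: train with a reconstruction loss, then compare word embeddings
# (e.g., by cosine similarity) for word-similarity evaluation.
model = ConvWordAutoencoder()
spec = torch.randn(8, 40, 57)                         # batch of 57-frame words
emb, recon = model(spec, out_len=spec.shape[-1])
loss = nn.functional.mse_loss(recon, spec)
```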