Correcting Automated and Manual Speech Transcription Errors using Warped Language Models (2103.14580v1)
Published 26 Mar 2021 in cs.CL
Abstract: Masked language models have revolutionized natural language processing systems in the past few years. Warped language models, a recently introduced generalization of masked language models, are trained to be more robust to the types of errors that appear in automatic and manual transcriptions of spoken language by exposing the model to those same error types during training. In this work, we propose a novel approach that leverages the robustness of warped language models to transcription noise in order to correct transcriptions of spoken language. We show that the proposed approach achieves up to a 10% reduction in word error rate on both automatic and manual transcriptions of spoken language.
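To make the training-time corruption described in the abstract concrete, the sketch below injects ASR-style token errors (deletions, insertions, substitutions, and masking) into a sequence before it is fed to the language model. The `warp_tokens` helper, the error probabilities, and the toy vocabulary are illustrative assumptions for this summary, not the paper's exact warping procedure.

```python
import random

# Hypothetical token-level noising sketch; the paper's actual warping procedure,
# error rates, and vocabulary handling may differ.
def warp_tokens(tokens, vocab, p_drop=0.05, p_insert=0.05, p_sub=0.05, p_mask=0.10):
    """Apply transcription-style corruptions to a list of tokens."""
    noisy = []
    for tok in tokens:
        r = random.random()
        if r < p_drop:
            continue                                 # deletion: token is dropped
        elif r < p_drop + p_insert:
            noisy.append(random.choice(vocab))       # insertion: spurious token added
            noisy.append(tok)
        elif r < p_drop + p_insert + p_sub:
            noisy.append(random.choice(vocab))       # substitution: wrong token replaces original
        elif r < p_drop + p_insert + p_sub + p_mask:
            noisy.append("[MASK]")                   # masking, as in standard masked language modeling
        else:
            noisy.append(tok)                        # token kept unchanged
    return noisy

# Example: corrupt a clean transcription; the model is then trained to recover the original tokens.
clean = "the cat sat on the mat".split()
print(warp_tokens(clean, vocab=["the", "cat", "dog", "sat", "mat", "on"]))
```

Training the model to recover the original tokens from such corrupted inputs is what makes it useful for correcting real transcription errors at inference time.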