Normalizing Text using Language Modelling based on Phonetics and String Similarity (2006.14116v1)

Published 25 Jun 2020 in cs.CL

Abstract: Social media networks and chatting platforms often use an informal version of natural text. Adversarial spelling attacks also tend to alter the input text by modifying the characters in the text. Normalizing these texts is an essential step for various applications like language translation and text-to-speech synthesis where the models are trained over clean regular English language. We propose a new robust model to perform text normalization. Our system uses the BERT language model to predict the masked words that correspond to the unnormalized words. We propose two unique masking strategies that try to replace the unnormalized words in the text with their root form using a unique score based on phonetic and string similarity metrics. We use human-centric evaluations where volunteers were asked to rank the normalized text. Our strategies yield an accuracy of 86.7% and 83.2% which indicates the effectiveness of our system in dealing with text normalization.
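The abstract describes scoring candidate replacements for a noisy token by combining phonetic similarity with string similarity. A minimal sketch of that idea, using classic Soundex as the phonetic key and `difflib`'s sequence ratio for string similarity (the paper's exact metrics and weighting are not given here, so the `alpha` blend and the candidate list are illustrative assumptions):

```python
import difflib

def soundex(word: str) -> str:
    """Compute the classic Soundex code for a word (a simple phonetic key)."""
    word = word.upper()
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    digits = [codes.get(c, "") for c in word]
    out, prev = [], digits[0]
    for d in digits[1:]:
        if d and d != prev:  # drop vowels/H/W and collapse adjacent duplicates
            out.append(d)
        prev = d
    return (word[0] + "".join(out) + "000")[:4]

def similarity_score(noisy: str, candidate: str, alpha: float = 0.5) -> float:
    """Blend phonetic match with character-level similarity.

    `alpha` is a hypothetical weight; the paper's exact scoring formula
    is not specified in the abstract.
    """
    phonetic = 1.0 if soundex(noisy) == soundex(candidate) else 0.0
    string_sim = difflib.SequenceMatcher(
        None, noisy.lower(), candidate.lower()).ratio()
    return alpha * phonetic + (1 - alpha) * string_sim

# Rank hypothetical mask-fill candidates for a noisy token like "helo";
# in the paper's pipeline, these candidates would come from BERT.
candidates = ["hello", "halo", "help"]
best = max(candidates, key=lambda c: similarity_score("helo", c))
```

In the full system, the noisy word is masked, BERT proposes fill-in candidates, and a score of this kind selects the replacement closest to the original unnormalized token.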

Authors (4)
  1. Fenil Doshi (1 paper)
  2. Jimit Gandhi (1 paper)
  3. Deep Gosalia (1 paper)
  4. Sudhir Bagul (4 papers)
Citations (1)
