
Utilizing Character and Word Embeddings for Text Normalization with Sequence-to-Sequence Models (1809.01534v1)

Published 5 Sep 2018 in cs.CL, cs.LG, and stat.ML

Abstract: Text normalization is an important enabling technology for several NLP tasks. Recently, neural-network-based approaches have outperformed well-established models in this task. However, in languages other than English, there has been little exploration in this direction. Both the scarcity of annotated data and the complexity of the language increase the difficulty of the problem. To address these challenges, we use a sequence-to-sequence model with character-based attention, which in addition to its self-learned character embeddings, uses word embeddings pre-trained with an approach that also models subword information. This provides the neural model with access to more linguistic information especially suitable for text normalization, without large parallel corpora. We show that providing the model with word-level features bridges the gap for the neural network approach to achieve a state-of-the-art F1 score on a standard Arabic language correction shared task dataset.
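To make the architecture described in the abstract concrete, the sketch below shows one plausible way to combine self-learned character embeddings with pre-trained, subword-aware word embeddings (such as fastText) in a character-level encoder. This is a minimal PyTorch illustration under our own assumptions, not the paper's implementation: all class names, dimensions, and the use of random tensors in place of real fastText vectors are ours. The idea is that every character position receives both its character embedding and the embedding of the word it belongs to, repeated once per character, before feeding a standard sequence-to-sequence encoder whose outputs would drive a character-based attention decoder.

import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    """Hypothetical encoder: each character vector is the concatenation of a
    learned character embedding and a pre-trained embedding of the word that
    contains the character (e.g. a fastText vector, which models subword
    information)."""
    def __init__(self, n_chars, char_dim, word_dim, hidden_dim):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.rnn = nn.LSTM(char_dim + word_dim, hidden_dim,
                           batch_first=True, bidirectional=True)

    def forward(self, char_ids, word_vecs):
        # char_ids:  (batch, seq_len)            integer character ids
        # word_vecs: (batch, seq_len, word_dim)  pre-trained embedding of the
        #            enclosing word, repeated at every character position
        x = torch.cat([self.char_emb(char_ids), word_vecs], dim=-1)
        outputs, _ = self.rnn(x)  # outputs would feed an attention decoder
        return outputs

# Toy usage: 2 sentences of 10 characters; random tensors stand in for real
# fastText word vectors so the snippet runs without external model files.
enc = CharWordEncoder(n_chars=100, char_dim=32, word_dim=300, hidden_dim=128)
chars = torch.randint(0, 100, (2, 10))
words = torch.randn(2, 10, 300)
print(enc(chars, words).shape)  # torch.Size([2, 10, 256])

Broadcasting the word vector across all of its characters is one simple design choice for giving the character model word-level context without changing the decoder; the paper's exact fusion mechanism may differ.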

Authors (3)
  1. Daniel Watson (8 papers)
  2. Nasser Zalmout (8 papers)
  3. Nizar Habash (66 papers)
Citations (26)
