Adapting Word Embeddings to New Languages with Morphological and Phonological Subword Representations (1808.09500v1)

Published 28 Aug 2018 in cs.CL

Abstract: Much work in NLP has been for resource-rich languages, making generalization to new, less-resourced languages challenging. We present two approaches for improving generalization to low-resourced languages by adapting continuous word representations using linguistically motivated subword units: phonemes, morphemes and graphemes. Our method requires neither parallel corpora nor bilingual dictionaries and provides a significant gain in performance over previous methods relying on these resources. We demonstrate the effectiveness of our approaches on Named Entity Recognition for four languages, namely Uyghur, Turkish, Bengali and Hindi, of which Uyghur and Bengali are low resource languages, and also perform experiments on Machine Translation. Exploiting subwords with transfer learning gives us a boost of +15.2 NER F1 for Uyghur and +9.7 F1 for Bengali. We also show improvements in the monolingual setting where we achieve (avg.) +3 F1 and (avg.) +1.35 BLEU.
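
The abstract sketches the core idea: build word representations compositionally from linguistically motivated subword units (phonemes, morphemes, graphemes) so that embeddings transfer to related low-resource languages without parallel corpora or bilingual dictionaries. Below is a minimal illustrative sketch, not the authors' implementation: it assumes a simple averaging composition over hypothetical subword embeddings, whereas the paper trains such representations jointly with the downstream task.

```python
# Minimal sketch (hypothetical, not the paper's code): a word vector is
# composed by averaging the embeddings of its subword units. In the paper,
# subwords come from morphological analysis, graphemes, and phonemes
# (via a grapheme-to-phoneme tool); here they are hard-coded toy examples.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # hypothetical embedding size

# Toy subword inventory: a stem morpheme, a plural suffix, and graphemes.
subword_vocab = ["kitab", "+lar", "k", "i", "t", "a", "b", "l", "r"]
subword_emb = {u: rng.normal(size=DIM) for u in subword_vocab}

def word_vector(subwords):
    """Average the embeddings of the subword units present in the vocabulary."""
    vecs = [subword_emb[u] for u in subwords if u in subword_emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

# "kitablar" ('books' in Turkish/Uyghur), decomposed into morphemes plus graphemes.
vec = word_vector(["kitab", "+lar"] + list("kitablar"))
print(vec.shape)  # (64,)
```

Because closely related languages such as Turkish and Uyghur share many subword units, representations learned on the higher-resource language can be reused for the lower-resource one, which is the source of the transfer gains reported in the abstract.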

Authors (6)
  1. Aditi Chaudhary (24 papers)
  2. Chunting Zhou (36 papers)
  3. Lori Levin (17 papers)
  4. Graham Neubig (342 papers)
  5. David R. Mortensen (40 papers)
  6. Jaime G. Carbonell (13 papers)
Citations (60)
