Neural Polysynthetic Language Modelling (2005.05477v2)

Published 11 May 2020 in cs.CL

Abstract: Research in natural language processing commonly assumes that approaches that work well for English and other widely-used languages are "language agnostic". In high-resource languages, especially those that are analytic, a common approach is to treat morphologically-distinct variants of a common root as completely independent word types. This assumes that there are limited morphological inflections per root, and that the majority will appear in a large enough corpus, so that the model can adequately learn statistics about each form. Approaches like stemming, lemmatization, or subword segmentation are often used when either of those assumptions does not hold, particularly in the case of synthetic languages like Spanish or Russian that have more inflection than English. In the literature, languages like Finnish or Turkish are held up as extreme examples of complexity that challenge common modelling assumptions. Yet, when considering all of the world's languages, Finnish and Turkish are closer to the average case. When we consider polysynthetic languages (those at the extreme of morphological complexity), approaches like stemming, lemmatization, or subword modelling may not suffice. These languages have very high numbers of hapax legomena, showing the need for appropriate morphological handling of words, without which it is not possible for a model to capture enough word statistics. We examine the current state of the art in language modelling, machine translation, and text prediction for four polysynthetic languages: Guaraní, St. Lawrence Island Yupik, Central Alaskan Yupik, and Inuktitut. We then propose a novel framework for language modelling that combines knowledge representations from finite-state morphological analyzers with Tensor Product Representations in order to enable neural language models capable of handling the full range of typologically variant languages.
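
To make the proposed combination concrete, here is a minimal sketch of Smolensky-style role/filler binding with Tensor Product Representations (TPRs), applied to the output of a morphological analyzer. The embedding sizes, the random embeddings, the morpheme segmentation, and the unbinding step are illustrative assumptions for this sketch, not the paper's actual architecture.

```python
import numpy as np

# Sketch: encode a morphological analysis as a Tensor Product
# Representation by binding morpheme embeddings (fillers) to slot
# embeddings (roles). All dimensions and vectors are assumptions.

rng = np.random.default_rng(0)
D_FILLER, D_ROLE = 8, 6  # assumed embedding sizes

# Hypothetical analyzer output for the St. Lawrence Island Yupik word
# angyaghllangyugtuq, "he wants to get a big boat"
# (boat-big-acquire-want-IND.3SG); segmentation is approximate.
analysis = ["angyagh", "lla", "ng", "yug", "tuq"]

fillers = {m: rng.normal(size=D_FILLER) for m in analysis}  # morpheme vectors
roles = [rng.normal(size=D_ROLE) for _ in analysis]         # one vector per slot

# The word's TPR is the sum of filler (outer product) role bindings:
# a fixed-size matrix no matter how many morphemes the word contains.
tpr = sum(np.outer(fillers[m], r) for m, r in zip(analysis, roles))
print(tpr.shape)  # (8, 6)

# With linearly independent role vectors, each morpheme can be
# recovered exactly via the dual basis of the role matrix.
R = np.stack(roles, axis=1)   # columns are role vectors, shape (6, 5)
duals = np.linalg.pinv(R)     # rows satisfy duals[i] @ roles[j] == delta_ij
root = tpr @ duals[0]         # unbind slot 0
print(np.allclose(root, fillers["angyagh"]))  # True
```

Because the bound representation has a fixed size regardless of how many morphemes a word contains, it offers a way around the open-vocabulary problem that hapax-heavy polysynthetic corpora pose for word-level models.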

Authors (21)
  1. Lane Schwartz (7 papers)
  2. Francis Tyers (7 papers)
  3. Lori Levin (17 papers)
  4. Christo Kirov (16 papers)
  5. Patrick Littell (8 papers)
  6. Chi-kiu Lo (3 papers)
  7. Emily Prud'hommeaux (7 papers)
  8. Hyunji Hayley Park (4 papers)
  9. Kenneth Steimel (2 papers)
  10. Rebecca Knowles (3 papers)
  11. Jeffrey Micher (3 papers)
  12. Lonny Strunk (1 paper)
  13. Han Liu (340 papers)
  14. Coleman Haley (4 papers)
  15. Katherine J. Zhang (3 papers)
  16. Robbie Jimmerson (1 paper)
  17. Vasilisa Andriyanets (1 paper)
  18. Aldrian Obaja Muis (6 papers)
  19. Naoki Otani (8 papers)
  20. Jong Hyuk Park (4 papers)
Citations (23)