
Unsupervised Morphological Tree Tokenizer (2406.15245v1)

Published 21 Jun 2024 in cs.CL and cs.LG

Abstract: As a cornerstone in language modeling, tokenization involves segmenting text inputs into pre-defined atomic units. Conventional statistical tokenizers often disrupt constituent boundaries within words, thereby corrupting semantic information. To address this drawback, we introduce morphological structure guidance to tokenization and propose a deep model to induce character-level structures of words. Specifically, the deep model jointly encodes internal structures and representations of words with a mechanism named $\textit{MorphOverriding}$ to ensure the indecomposability of morphemes. By training the model with self-supervised objectives, our method is capable of inducing character-level structures that align with morphological rules without annotated training data. Based on the induced structures, our algorithm tokenizes words through vocabulary matching in a top-down manner. Empirical results indicate that the proposed method effectively retains complete morphemes and outperforms widely adopted methods such as BPE and WordPiece on both morphological segmentation tasks and language modeling tasks. The code will be released later.
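
The top-down vocabulary matching described in the abstract lends itself to a short illustration. Below is a minimal sketch, assuming an already-induced binary character-level parse tree and a fixed subword vocabulary; the `Node` class, the `tokenize` function, and the example tree are hypothetical illustrations, not the authors' released implementation.

```python
# Hypothetical sketch of top-down vocabulary matching over an induced
# character-level binary tree (names and data structures are illustrative).

from dataclasses import dataclass
from typing import List, Optional, Set


@dataclass
class Node:
    """A span of characters in a word, with optional binary children."""
    text: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None


def tokenize(node: Node, vocab: Set[str]) -> List[str]:
    """Emit the largest in-vocabulary span first, splitting only when needed."""
    # If the whole span is a known subword, keep it intact: this is what
    # preserves complete morphemes when the tree respects morpheme boundaries.
    if node.text in vocab:
        return [node.text]
    # A leaf not in the vocabulary falls back to a single character
    # (assuming all single characters are included in the vocabulary).
    if node.left is None and node.right is None:
        return [node.text]
    # Otherwise descend into the two induced constituents and tokenize each.
    return tokenize(node.left, vocab) + tokenize(node.right, vocab)


# Example: a tree for "unhappily" bracketed as ((un)(happi))(ly) keeps the
# morphemes "un", "happi", and "ly" whole if they are in the vocabulary.
tree = Node(
    "unhappily",
    left=Node("unhappi", left=Node("un"), right=Node("happi")),
    right=Node("ly"),
)
print(tokenize(tree, {"un", "happi", "ly"}))  # ['un', 'happi', 'ly']
```

Because the match is attempted at the largest span first and recursion only follows the induced constituent boundaries, a morpheme that appears as a single subtree is never split mid-way, in contrast to bottom-up merge schemes such as BPE.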

Authors (5)
  1. Qingyang Zhu (4 papers)
  2. Xiang Hu (25 papers)
  3. Pengyu Ji (2 papers)
  4. Wei Wu (481 papers)
  5. Kewei Tu (74 papers)