Inconsistent Tokenizations Cause Language Models to be Perplexed by Japanese Grammar (2505.19599v1)
Abstract: Typical methods for evaluating LLMs measure their ability to answer questions accurately. These metrics are acceptable for determining the extent to which LLMs can understand and reason about text in a general sense, but they fail to capture nuanced capabilities, such as the ability of LLMs to recognize and obey rare grammar points, particularly in languages other than English. We measure the perplexity of LLMs when confronted with the "first person psych predicate restriction" grammar point in Japanese. Weblab is the only tested open-source model in the 7-10B parameter range that consistently assigns higher perplexity to ungrammatical psych predicate sentences than to grammatical ones. We give evidence that Weblab's uniformly bad tokenization is a possible root cause of its good performance, and show that Llama 3's perplexity on grammatical psych predicate sentences can be reduced by more than an order of magnitude (a 28x difference) by restricting test sentences to those with uniformly well-behaved tokenizations. Further experiments on machine translation tasks show that LLMs will fall back on alternative grammar patterns to produce grammatical sentences when tokenization issues prevent the most natural sentence from being output.
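As a rough illustration of the perplexity comparison the abstract describes, the sketch below computes sentence perplexity with a Hugging Face causal LM. This is a minimal sketch, not the paper's evaluation code: the model name, the `perplexity` helper, and the example sentence pair are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumed model for illustration; the paper tests several 7-10B models.
MODEL_NAME = "meta-llama/Meta-Llama-3-8B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def perplexity(sentence: str) -> float:
    """Perplexity = exp(mean negative log-likelihood of the tokens)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # When labels are supplied, a causal LM returns the mean
        # cross-entropy loss over the (shifted) token sequence.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

# Illustrative pair (not from the paper): a bare psych predicate like
# 嬉しい "happy" is licensed with a first-person subject but is
# ungrammatical with a third-person subject.
grammatical = "私は嬉しい。"    # "I am happy."
ungrammatical = "彼は嬉しい。"  # "*He is happy." (bare psych predicate)

print(perplexity(grammatical), perplexity(ungrammatical))
```

Under the paper's hypothesis, a well-behaved model should assign lower perplexity to the grammatical sentence, though the abstract notes that inconsistent tokenizations of the test sentences can swamp this effect.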