Unigram-Normalized Perplexity as a Language Model Performance Measure with Different Vocabulary Sizes (2011.13220v1)

Published 26 Nov 2020 in cs.CL and cs.LG

Abstract: Although Perplexity is a widely used performance metric for LLMs, its values depend heavily on the number of words in the corpus, so it is useful only for comparing performance on the same corpus. In this paper, we propose a new metric that can be used to evaluate LLM performance across different vocabulary sizes. The proposed unigram-normalized Perplexity expresses the performance improvement of an LLM over that of a simple unigram model, and is robust to the vocabulary size. Both theoretical analysis and computational experiments are reported.
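
As a rough illustration of the metric described in the abstract, the Python sketch below compares ordinary perplexity with a unigram-normalized variant on toy per-token probabilities. The ratio form used here (model perplexity divided by unigram perplexity) is an assumption made for illustration and may not match the paper's exact definition; the function names and the toy probability values are hypothetical.

import math

def perplexity(token_probs):
    # Perplexity: exponential of the average negative log-probability per token.
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

def unigram_normalized_perplexity(model_probs, unigram_probs):
    # Assumed form: model perplexity divided by unigram perplexity,
    # i.e. the geometric mean of P_unigram(w_i) / P_model(w_i | history).
    # Values below 1 indicate an improvement over the unigram baseline.
    n = len(model_probs)
    log_ratio = sum(math.log(u) - math.log(m)
                    for m, u in zip(model_probs, unigram_probs))
    return math.exp(log_ratio / n)

# Hypothetical per-token probabilities for a short evaluation sequence.
model_probs = [0.20, 0.05, 0.30, 0.10]    # P(w_i | history) from the language model
unigram_probs = [0.02, 0.01, 0.05, 0.02]  # P(w_i) from a unigram model of the same corpus

print(perplexity(model_probs))                                    # plain perplexity
print(unigram_normalized_perplexity(model_probs, unigram_probs))  # normalized variant

Under this assumed form, the unigram baseline absorbs the dependence on vocabulary size (a larger vocabulary inflates both the model's and the unigram model's perplexity), which is why the normalized value can be compared across corpora with different vocabulary sizes.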

Citations (4)
