Unigram-Normalized Perplexity as a Language Model Performance Measure with Different Vocabulary Sizes

Published 26 Nov 2020 in cs.CL and cs.LG | (2011.13220v1)

Abstract: Although perplexity is a widely used performance metric for language models, its values depend strongly on the number of words in the corpus, so it is useful only for comparing performance on the same corpus. In this paper, we propose a new metric that can be used to evaluate language model performance across different vocabulary sizes. The proposed unigram-normalized perplexity presents the performance improvement of a language model over a simple unigram model, and is robust to the vocabulary size. Both theoretical analysis and computational experiments are reported.
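The abstract does not spell out the formula, but a natural reading is that the proposed metric relates the model's perplexity to that of a unigram baseline evaluated on the same tokens, so that the ratio reflects improvement over the unigram model rather than raw perplexity. A minimal sketch under that assumption, taking per-token log-probabilities from the evaluated model and from a unigram model as inputs (all names here are illustrative, not from the paper):

```python
import math

def perplexity(token_log_probs):
    """Standard perplexity: exp of the average negative log-probability per token."""
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)

def unigram_normalized_perplexity(model_log_probs, unigram_log_probs):
    """Assumed form of unigram-normalized perplexity: the model's perplexity
    divided by the unigram baseline's perplexity on the same token sequence.
    Values below 1.0 indicate an improvement over the unigram model."""
    assert len(model_log_probs) == len(unigram_log_probs)
    return perplexity(model_log_probs) / perplexity(unigram_log_probs)

# Hypothetical example: natural-log probabilities for a 4-token test sequence.
model_lp = [math.log(p) for p in (0.20, 0.05, 0.10, 0.30)]
unigram_lp = [math.log(p) for p in (0.01, 0.02, 0.01, 0.05)]

print(perplexity(model_lp))                                  # model perplexity
print(unigram_normalized_perplexity(model_lp, unigram_lp))   # < 1.0 here
```

Under this reading, the unigram baseline's perplexity grows with vocabulary size in much the same way as the model's, so the ratio is far less sensitive to the vocabulary than raw perplexity, which is consistent with the robustness claim in the abstract.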

Citations (4)
