
Language models are better than humans at next-token prediction (2212.11281v2)

Published 21 Dec 2022 in cs.CL, cs.AI, and cs.LG

Abstract: Current LLMs are considered to have sub-human capabilities at natural language tasks like question-answering or writing code. However, LLMs are not trained to perform well at these tasks; they are trained to accurately predict the next token given previous tokens in tokenized text. It is not clear whether LLMs are better or worse than humans at next-token prediction. To try to answer this question, we performed two distinct experiments to directly compare humans and LLMs on this front: one measuring top-1 accuracy and the other measuring perplexity. In both experiments, we find humans to be consistently *worse* than even relatively small LLMs like GPT3-Ada at next-token prediction.
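As a rough illustration of the two metrics the abstract mentions, the sketch below computes top-1 next-token accuracy and perplexity for a small causal language model on a sample sentence. The model choice (gpt2) and the sample text are assumptions made for illustration; this is not the authors' experimental protocol, which compares such model scores against human guesses on the same prediction task.

```python
# Minimal sketch (not the paper's code): top-1 next-token accuracy and
# perplexity for a causal LM, in the spirit of the paper's two experiments.
# The model (gpt2) and sample text are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The quick brown fox jumps over the lazy dog."
ids = tokenizer(text, return_tensors="pt").input_ids  # shape: (1, seq_len)

with torch.no_grad():
    logits = model(ids).logits  # shape: (1, seq_len, vocab_size)

# Align predictions with targets: the logits at position t predict token t+1.
preds = logits[0, :-1].argmax(dim=-1)
targets = ids[0, 1:]

# Experiment 1 analogue: top-1 accuracy of next-token prediction.
top1_acc = (preds == targets).float().mean().item()

# Experiment 2 analogue: perplexity = exp(mean negative log-likelihood).
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
nll = -log_probs[torch.arange(targets.numel()), targets]
perplexity = nll.mean().exp().item()

print(f"top-1 accuracy: {top1_acc:.3f}, perplexity: {perplexity:.2f}")
```

The human-side analogue of Experiment 1 asks people to guess the next token and scores exact matches; for Experiment 2, per-token probabilities elicited from humans stand in for log_probs above.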

Citations (8)
