Limits of Detecting Text Generated by Large-Scale Language Models (2002.03438v1)
Abstract: Some consider large-scale LLMs that can generate long and coherent pieces of text dangerous, since they may be used in misinformation campaigns. Here we formulate large-scale LLM output detection as a hypothesis testing problem: classify a piece of text as genuine or generated. We show that the error exponents for particular LLMs are bounded in terms of their perplexity, a standard measure of language generation performance. Under the assumption that human language is stationary and ergodic, the formulation is extended from specific LLMs to maximum likelihood LLMs within the class of k-order Markov approximations, and the error probabilities are characterized. Some discussion of incorporating semantic side information is also given.
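
To make the abstract's claim concrete, the following is a minimal sketch, in standard information-theoretic notation, of how a hypothesis-testing formulation ties a detection error exponent to perplexity. The symbols P (distribution of human text), Q (distribution of LLM output), and the appeal to the Chernoff-Stein lemma are assumptions introduced here for illustration; they are not notation taken from the paper itself, and the paper's actual bound may differ in its details.

% Hedged sketch: one standard way to connect a binary hypothesis test
% between human text (P) and LLM output (Q) to the model's perplexity.
% All notation below is assumed for illustration, not quoted from the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

Given a length-$n$ sample $X_1,\dots,X_n$, test
\begin{align*}
  H_0:\ & X_1^n \sim P \quad \text{(genuine, human-written text)} \\
  H_1:\ & X_1^n \sim Q \quad \text{(text generated by the LLM)}.
\end{align*}
By the Chernoff--Stein lemma, with the type-I error held below a fixed
$\varepsilon$, the best achievable type-II error decays as
$\beta_n \doteq 2^{-n D(P\|Q)}$, so the error exponent is the relative
entropy $D(P\|Q)$. Since the cross-entropy of $Q$ against $P$ decomposes as
\[
  H(P,Q) \;=\; H(P) + D(P\|Q), \qquad \mathrm{PPL}_Q \;=\; 2^{H(P,Q)},
\]
the exponent can be rewritten as
\[
  D(P\|Q) \;=\; \log_2 \mathrm{PPL}_Q \;-\; H(P),
\]
which is one way to see why the achievable error exponent is controlled by
the model's perplexity: as $\mathrm{PPL}_Q$ approaches the entropy rate of
human text, the exponent vanishes and detection becomes hard.

\end{document}

Under these (assumed) definitions, a lower perplexity model is harder to distinguish from genuine text at any fixed sample length, which matches the abstract's statement that error exponents are bounded in terms of perplexity.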