
Testing the limits of natural language models for predicting human language judgments

Published 7 Apr 2022 in cs.CL, cs.AI, and q-bio.NC | (2204.03592v3)

Abstract: Neural network language models can serve as computational hypotheses about how humans process language. We compared the model-human consistency of diverse language models using a novel experimental approach: controversial sentence pairs. For each controversial sentence pair, two language models disagree about which sentence is more likely to occur in natural text. Considering nine language models (including n-gram, recurrent neural network, and transformer models), we created hundreds of such controversial sentence pairs by either selecting sentences from a corpus or synthetically optimizing sentence pairs to be highly controversial. Human subjects then judged which sentence in each pair was more likely. Controversial sentence pairs proved highly effective at revealing model failures and identifying the models most closely aligned with human judgments. The most human-consistent model tested was GPT-2, although the experiments also revealed significant shortcomings in its alignment with human perception.
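The selection criterion at the core of the method can be sketched in a few lines: a pair is "controversial" when two models rank its sentences differently by probability. The toy unigram and bigram models and the tiny corpus below are illustrative assumptions for the sketch, not the models or data used in the paper.

```python
import math

# Tiny illustrative corpus (assumption for this sketch).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

def unigram_logprob(sentence):
    # Add-one smoothed unigram log-probability over the toy corpus.
    vocab = set(corpus)
    total = len(corpus)
    return sum(
        math.log((corpus.count(w) + 1) / (total + len(vocab)))
        for w in sentence.split()
    )

def bigram_logprob(sentence):
    # Add-one smoothed bigram log-probability over the toy corpus.
    vocab = set(corpus)
    pairs = list(zip(corpus, corpus[1:]))
    words = sentence.split()
    lp = 0.0
    for prev, cur in zip(words, words[1:]):
        count_pair = pairs.count((prev, cur))
        count_prev = corpus.count(prev)
        lp += math.log((count_pair + 1) / (count_prev + len(vocab)))
    return lp

def is_controversial(s1, s2, model_a, model_b):
    # A pair is controversial iff the two models disagree about
    # which sentence is more probable.
    return (model_a(s1) > model_a(s2)) != (model_b(s1) > model_b(s2))
```

For example, the unigram model assigns "the cat sat" and its scrambled variant "cat the sat" identical probability (same bag of words), while the bigram model prefers the grammatical order, so `is_controversial` flags that pair; a pair the two models rank the same way is not flagged. The paper applies this idea at scale, both by filtering corpus sentences and by directly optimizing sentence pairs to maximize model disagreement.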
