LMentry: A Language Model Benchmark of Elementary Language Tasks (2211.02069v2)
Abstract: As the performance of large language models (LLMs) rapidly improves, benchmarks are getting larger and more complex as well. We present LMentry, a benchmark that avoids this "arms race" by focusing on a compact set of tasks that are trivial to humans, e.g. writing a sentence containing a specific word, identifying which words in a list belong to a specific category, or choosing which of two words is longer. LMentry is specifically designed to provide quick and interpretable insights into the capabilities and robustness of LLMs. Our experiments reveal a wide variety of failure cases that, while immediately obvious to humans, pose a considerable challenge for LLMs, including OpenAI's latest 175B-parameter instruction-tuned model, TextDavinci002. LMentry complements contemporary evaluation approaches of LLMs, providing a quick, automatic, and easy-to-run "unit test", without resorting to large benchmark suites of complex tasks.
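Because LMentry tasks have simple, well-defined success criteria, their outputs can be scored automatically with lightweight rule-based checks. The sketch below is illustrative only, not the paper's actual scoring code; the function names and checks are assumptions chosen to mirror two of the example tasks from the abstract.

```python
import re

# Illustrative sketch of LMentry-style automatic checks (hypothetical, not the
# official implementation). Each task's output is verified with a simple rule.

def contains_word(output: str, word: str) -> bool:
    """Check that a generated sentence contains the target word."""
    return re.search(rf"\b{re.escape(word)}\b", output.lower()) is not None

def picked_longer_word(output: str, word_a: str, word_b: str) -> bool:
    """Check that the model's answer names the longer of the two words."""
    longer = word_a if len(word_a) > len(word_b) else word_b
    return longer.lower() in output.lower()

# Example usage with hypothetical model outputs:
print(contains_word("My cat sleeps on the warm windowsill.", "windowsill"))   # True
print(picked_longer_word("The longer word is elephant.", "elephant", "ant"))  # True
```

Checks of this kind are what make the benchmark quick to run and interpret: each failure points directly at a concrete, human-trivial capability the model lacks.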