Evaluating Large Language Models on Controlled Generation Tasks (2310.14542v1)
Abstract: While recent studies have examined the abilities of LLMs on various benchmark tasks, such as question generation, reading comprehension, and multilingual tasks, few have investigated the controllability of LLMs on generation tasks. We present an extensive analysis of various benchmarks, including a sentence planning benchmark with different granularities. After comparing LLMs against state-of-the-art finetuned smaller models, we present a spectrum showing where LLMs fall behind, are comparable to, or exceed the abilities of smaller models. We conclude that LLMs struggle to meet fine-grained hard constraints.