Human Evaluation of Text-to-Image Models on a Multi-Task Benchmark (2211.12112v1)

Published 22 Nov 2022 in cs.CV, cs.AI, and cs.LG

Abstract: We provide a new multi-task benchmark for evaluating text-to-image models. We perform a human evaluation comparing the most common open-source (Stable Diffusion) and commercial (DALL-E 2) models. Twenty computer science AI graduate students evaluated the two models, on three tasks, at three difficulty levels, across ten prompts each, providing 3,600 ratings. Text-to-image generation has seen rapid progress to the point that many recent models can create realistic high-resolution images for various prompts. However, current text-to-image methods and the broader body of research in vision-language understanding still struggle with intricate text prompts that contain many objects with multiple attributes and relationships. We introduce a new text-to-image benchmark that contains a suite of thirty-two tasks over multiple applications that capture a model's ability to handle different features of a text prompt. For example, we ask a model to generate a varying number of the same object to measure its ability to count, or provide a text prompt with several objects that each have a different attribute to test whether it matches objects and attributes correctly. Rather than subjectively evaluating text-to-image results on a set of prompts, our new multi-task benchmark consists of challenge tasks at three difficulty levels (easy, medium, and hard) and human ratings for each generated image.
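As a quick consistency check on the reported study size, the 3,600 ratings follow directly from the figures in the abstract, assuming each of the twenty evaluators rated every model, task, difficulty level, and prompt combination:

\[
\underbrace{20}_{\text{evaluators}} \times \underbrace{2}_{\text{models}} \times \underbrace{3}_{\text{tasks}} \times \underbrace{3}_{\text{difficulty levels}} \times \underbrace{10}_{\text{prompts}} = 3600 \ \text{ratings}
\]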

Authors (14)
  1. Vitali Petsiuk (6 papers)
  2. Alexander E. Siemenn (7 papers)
  3. Saisamrit Surbehera (2 papers)
  4. Zad Chin (3 papers)
  5. Keith Tyser (4 papers)
  6. Gregory Hunter (2 papers)
  7. Arvind Raghavan (2 papers)
  8. Yann Hicke (10 papers)
  9. Bryan A. Plummer (64 papers)
  10. Ori Kerret (1 paper)
  11. Tonio Buonassisi (51 papers)
  12. Kate Saenko (178 papers)
  13. Armando Solar-Lezama (65 papers)
  14. Iddo Drori (34 papers)
Citations (29)