Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models (2306.04675v2)

Published 7 Jun 2023 in cs.LG, cs.CV, and stat.ML

Abstract: We systematically study a wide variety of generative models spanning semantically-diverse image datasets to understand and improve the feature extractors and metrics used to evaluate them. Using best practices in psychophysics, we measure human perception of image realism for generated samples by conducting the largest experiment evaluating generative models to date, and find that no existing metric strongly correlates with human evaluations. Comparing to 17 modern metrics for evaluating the overall performance, fidelity, diversity, rarity, and memorization of generative models, we find that the state-of-the-art perceptual realism of diffusion models as judged by humans is not reflected in commonly reported metrics such as FID. This discrepancy is not explained by diversity in generated samples, though one cause is over-reliance on Inception-V3. We address these flaws through a study of alternative self-supervised feature extractors, find that the semantic information encoded by individual networks strongly depends on their training procedure, and show that DINOv2-ViT-L/14 allows for much richer evaluation of generative models. Next, we investigate data memorization, and find that generative models do memorize training examples on simple, smaller datasets like CIFAR10, but not necessarily on more complex datasets like ImageNet. However, our experiments show that current metrics do not properly detect memorization: none in the literature is able to separate memorization from other phenomena such as underfitting or mode shrinkage. To facilitate further development of generative models and their evaluation we release all generated image datasets, human evaluation data, and a modular library to compute 17 common metrics for 9 different encoders at https://github.com/layer6ai-labs/dgm-eval.
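The FID scores discussed above are Fréchet distances between Gaussians fit to features extracted from real and generated images, and the abstract's point is that replacing Inception-V3 with DINOv2-ViT-L/14 as the extractor gives a richer evaluation. The sketch below is a minimal, hedged illustration of that computation using the public DINOv2 torch.hub entry point; it is not the authors' dgm-eval API, and the batch size, image preprocessing, and device handling are assumptions.

```python
# Minimal sketch (not the dgm-eval library): Fréchet distance between real and
# generated image features, using DINOv2-ViT-L/14 instead of Inception-V3.
import numpy as np
import torch
from scipy import linalg


def frechet_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Fréchet distance between Gaussians fit to two feature sets (same formula as FID)."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerical error
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))


@torch.no_grad()
def dinov2_features(images: torch.Tensor, device: str = "cuda") -> np.ndarray:
    """Extract DINOv2-ViT-L/14 features.

    `images` is assumed to be an (N, 3, 224, 224) tensor normalized with
    ImageNet statistics; the hub entry point follows the DINOv2 repository.
    """
    model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14").to(device).eval()
    feats = []
    for batch in images.split(64):  # batch size is an arbitrary choice
        feats.append(model(batch.to(device)).cpu().numpy())
    return np.concatenate(feats, axis=0)


# Usage (assuming `real_imgs` and `gen_imgs` are preprocessed tensors):
# fd = frechet_distance(dinov2_features(real_imgs), dinov2_features(gen_imgs))
```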

Authors (10)
  1. George Stein (28 papers)
  2. Jesse C. Cresswell (39 papers)
  3. Rasa Hosseinzadeh (14 papers)
  4. Yi Sui (16 papers)
  5. Brendan Leigh Ross (15 papers)
  6. Valentin Villecroze (6 papers)
  7. Zhaoyan Liu (7 papers)
  8. Anthony L. Caterini (17 papers)
  9. J. Eric T. Taylor (1 paper)
  10. Gabriel Loaiza-Ganem (30 papers)
Citations (57)