Evaluation Metrics for Conditional Image Generation (2004.12361v2)

Published 26 Apr 2020 in cs.CV, cs.LG, and eess.IV

Abstract: We present two new metrics for evaluating generative models in the class-conditional image generation setting. These metrics are obtained by generalizing the two most popular unconditional metrics: the Inception Score (IS) and the Fréchet Inception Distance (FID). A theoretical analysis shows the motivation behind each proposed metric and links the novel metrics to their unconditional counterparts. The link takes the form of a product in the case of IS or an upper bound in the FID case. We provide an extensive empirical evaluation, comparing the metrics to their unconditional variants and to other metrics, and utilize them to analyze existing generative models, thus providing additional insights about their performance, from unlearned classes to mode collapse.
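
To illustrate the general idea of a class-conditional FID-style metric, the sketch below partitions Inception-style feature vectors by class label and averages per-class Fréchet distances. This is an assumption-laden illustration, not the paper's exact definition: the function names (frechet_distance, per_class_fid), the equal-weight averaging over classes, and the use of precomputed features are all choices made here for clarity.

    import numpy as np
    from scipy import linalg

    def frechet_distance(mu1, sigma1, mu2, sigma2):
        # Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2),
        # the quantity underlying the standard (unconditional) FID.
        diff = mu1 - mu2
        covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
        if np.iscomplexobj(covmean):
            covmean = covmean.real
        return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

    def per_class_fid(real_feats, real_labels, fake_feats, fake_labels):
        # Illustrative conditional variant: compute a Fréchet distance per class
        # and average. The exact aggregation in the paper may differ.
        scores = []
        for c in np.unique(real_labels):
            r = real_feats[real_labels == c]
            f = fake_feats[fake_labels == c]
            mu_r, sigma_r = r.mean(axis=0), np.cov(r, rowvar=False)
            mu_f, sigma_f = f.mean(axis=0), np.cov(f, rowvar=False)
            scores.append(frechet_distance(mu_r, sigma_r, mu_f, sigma_f))
        return float(np.mean(scores))

In practice the feature vectors would come from a pretrained Inception network applied to real and generated images; the code above assumes they have already been extracted as NumPy arrays.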

Authors (4)
  1. Yaniv Benny (7 papers)
  2. Tomer Galanti (31 papers)
  3. Sagie Benaim (39 papers)
  4. Lior Wolf (217 papers)
Citations (35)
