A Novel Measure to Evaluate Generative Adversarial Networks Based on Direct Analysis of Generated Images (2002.12345v4)

Published 27 Feb 2020 in cs.CV, cs.LG, and eess.IV

Abstract: The Generative Adversarial Network (GAN) is a state-of-the-art technique in the field of deep learning. Many papers address the theory and applications of GANs in various areas of image processing, but fewer studies have directly evaluated GAN outputs. Those that have focus on classification performance, e.g., the Inception Score (IS), or on statistical metrics, e.g., the Fréchet Inception Distance (FID). Here, we consider a fundamental way to evaluate GANs by directly analyzing the images they generate, instead of using them as inputs to other classifiers. We characterize the performance of a GAN as an image generator according to three aspects: 1) Creativity: no duplication of the real images. 2) Inheritance: generated images should have the same style as, and retain key features of, the real images. 3) Diversity: generated images should differ from each other; a GAN should not generate a few distinct images repeatedly. Based on these three aspects of an ideal GAN, we design the Likeness Score (LS) to evaluate GAN performance and apply it to several typical GANs. We compare the proposed measure with two commonly used GAN evaluation methods, IS and FID, as well as four additional measures. Furthermore, we discuss how these evaluations can deepen our understanding of GANs and help improve their performance.
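The abstract's actual Likeness Score is defined only in the full paper. As a rough illustration of what "directly analyzing the images they generate" can look like, the sketch below computes plain pixel-space Euclidean distances between a generated batch and a real batch and derives one summary number per aspect (creativity, inheritance, diversity). The function names and the specific statistics are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def pairwise_dists(a, b):
    """Euclidean distances between every row of a and every row of b."""
    a2 = (a ** 2).sum(axis=1)[:, None]          # shape (n, 1)
    b2 = (b ** 2).sum(axis=1)[None, :]          # shape (1, m)
    return np.sqrt(np.maximum(a2 + b2 - 2.0 * a @ b.T, 0.0))

def direct_image_analysis(real, fake):
    """real, fake: (n, d) arrays of flattened, [0, 1]-normalized images."""
    cross = pairwise_dists(fake, real)          # generated vs. real
    intra = pairwise_dists(fake, fake)          # generated vs. generated
    np.fill_diagonal(intra, np.nan)             # ignore self-distances

    creativity = cross.min(axis=1).mean()   # large -> few near-copies of real images
    inheritance = cross.mean()              # small -> generated set stays near the real images
    diversity = np.nanmean(intra)           # large -> generated images differ from one another
    return creativity, inheritance, diversity

# Toy usage with random arrays standing in for real and generated batches.
rng = np.random.default_rng(0)
real = rng.random((256, 28 * 28))
fake = rng.random((256, 28 * 28))
print(direct_image_analysis(real, fake))
```

In practice such distances would more likely be computed in a feature or perceptual space rather than raw pixels, and the paper condenses these considerations into the single Likeness Score; its exact formulation is given in the full text.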

Authors (2)
  1. Shuyue Guan (20 papers)
  2. Murray Loew (18 papers)
Citations (12)