
An error analysis of generative adversarial networks for learning distributions (2105.13010v5)

Published 27 May 2021 in cs.LG, math.ST, stat.ML, and stat.TH

Abstract: This paper studies how well generative adversarial networks (GANs) learn probability distributions from finite samples. Our main results establish the convergence rates of GANs under a collection of integral probability metrics defined through Hölder classes, including the Wasserstein distance as a special case. We also show that GANs are able to adaptively learn data distributions with low-dimensional structures or Hölder densities, when the network architectures are chosen properly. In particular, for distributions concentrated around a low-dimensional set, we show that the learning rates of GANs do not depend on the high ambient dimension, but on the lower intrinsic dimension. Our analysis is based on a new oracle inequality decomposing the estimation error into the generator and discriminator approximation error and the statistical error, which may be of independent interest.
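The convergence rates above are stated in integral probability metrics (IPMs). As a reminder of the standard definition (textbook material, not this paper's specific notation), an IPM indexed by a function class $\mathcal{F}$ is:

```latex
% IPM induced by a discriminator class F (e.g., a Hölder class):
d_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{f \in \mathcal{F}}
  \Bigl| \mathbb{E}_{X \sim \mu}[f(X)] \;-\; \mathbb{E}_{Y \sim \nu}[f(Y)] \Bigr|
% Taking F to be the 1-Lipschitz functions recovers the Wasserstein-1
% distance, by Kantorovich–Rubinstein duality — the special case the
% abstract mentions.
```

When $\mathcal{F}$ is a Hölder ball of smoothness $\alpha$, varying $\alpha$ interpolates between weaker and stronger notions of distributional convergence, which is why the paper can treat a whole family of metrics at once.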

Authors (6)
  1. Jian Huang (165 papers)
  2. Yuling Jiao (81 papers)
  3. Zhen Li (334 papers)
  4. Shiao Liu (3 papers)
  5. Yang Wang (672 papers)
  6. Yunfei Yang (26 papers)
Citations (33)
