An error analysis of generative adversarial networks for learning distributions (2105.13010v5)
Abstract: This paper studies how well generative adversarial networks (GANs) learn probability distributions from finite samples. Our main results establish the convergence rates of GANs under a collection of integral probability metrics defined through Hölder classes, including the Wasserstein distance as a special case. We also show that, when the network architectures are chosen properly, GANs can adaptively learn data distributions that have low-dimensional structure or Hölder densities. In particular, for distributions concentrated around a low-dimensional set, we show that the learning rates of GANs do not depend on the high ambient dimension, but rather on the lower intrinsic dimension. Our analysis is based on a new oracle inequality that decomposes the estimation error into the generator and discriminator approximation errors and the statistical error, which may be of independent interest.
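For context, the integral probability metrics referenced in the abstract follow a standard construction. The sketch below uses generic notation rather than the paper's exact symbols, and states the oracle-inequality decomposition only schematically; the precise terms and constants are given in the paper.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% IPM induced by a function class $\mathcal{F}$ (standard definition):
\[
  d_{\mathcal{F}}(\mu,\nu)
    = \sup_{f \in \mathcal{F}}
      \bigl|\, \mathbb{E}_{X\sim\mu}[f(X)] - \mathbb{E}_{Y\sim\nu}[f(Y)] \,\bigr|.
\]

% Taking $\mathcal{F}$ to be a H\"older ball yields the family of metrics
% studied in the paper; the 1-Lipschitz case recovers the Wasserstein-1
% distance via Kantorovich--Rubinstein duality:
\[
  W_1(\mu,\nu)
    = \sup_{\operatorname{Lip}(f) \le 1}
      \mathbb{E}_{\mu}[f] - \mathbb{E}_{\nu}[f].
\]

% Schematic form of the oracle inequality described in the abstract:
\[
  \text{estimation error}
    \;\lesssim\;
  \text{generator approximation error}
    + \text{discriminator approximation error}
    + \text{statistical error}.
\]

\end{document}
```

When the discriminator class is the set of 1-Lipschitz functions, the induced IPM coincides with the Wasserstein-1 distance, which is why the abstract lists it as a special case of the Hölder family.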
- Jian Huang (165 papers)
- Yuling Jiao (81 papers)
- Zhen Li (334 papers)
- Shiao Liu (3 papers)
- Yang Wang (672 papers)
- Yunfei Yang (26 papers)