Nonparametric Density Estimation under Adversarial Losses (1805.08836v2)

Published 22 May 2018 in math.ST, cs.IT, math.IT, stat.ML, and stat.TH

Abstract: We study minimax convergence rates of nonparametric density estimation under a large class of loss functions called "adversarial losses", which, besides classical $\mathcal{L}^p$ losses, includes maximum mean discrepancy (MMD), Wasserstein distance, and total variation distance. These losses are closely related to the losses encoded by discriminator networks in generative adversarial networks (GANs). In a general framework, we study how the choice of loss and the assumed smoothness of the underlying density together determine the minimax rate. We also discuss implications for training GANs based on deep ReLU networks, and more general connections to learning implicit generative models in a minimax statistical sense.
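For context, an adversarial loss of this kind takes the form $d_{\mathcal{F}}(P, Q) = \sup_{f \in \mathcal{F}} \left|\mathbb{E}_{X \sim P} f(X) - \mathbb{E}_{Y \sim Q} f(Y)\right|$ for a discriminator class $\mathcal{F}$; choosing $\mathcal{F}$ to be the unit ball of a reproducing kernel Hilbert space yields the MMD, which admits a closed-form empirical estimate. The sketch below is illustrative only and not code from the paper; the function names, the Gaussian kernel, and the bandwidth value are assumptions made for the example.

```python
import numpy as np

def rbf_kernel(A, B, bandwidth=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    sq_dists = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd_squared(X, Y, bandwidth=1.0):
    # Biased (V-statistic) estimate of squared MMD between samples X ~ P and Y ~ Q.
    Kxx = rbf_kernel(X, X, bandwidth)
    Kyy = rbf_kernel(Y, Y, bandwidth)
    Kxy = rbf_kernel(X, Y, bandwidth)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

# Example: two Gaussian samples with shifted means give a nonzero MMD estimate.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 1))
Y = rng.normal(0.5, 1.0, size=(500, 1))
print(mmd_squared(X, Y, bandwidth=1.0))
```

The paper's analysis concerns how fast such losses between an estimated and a true density can shrink with sample size under smoothness assumptions; the estimator above only illustrates what evaluating one member of this loss family looks like in practice.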

Authors (6)
  1. Shashank Singh (33 papers)
  2. Ananya Uppal (4 papers)
  3. Boyue Li (8 papers)
  4. Chun-Liang Li (60 papers)
  5. Manzil Zaheer (89 papers)
  6. Barnabás Póczos (39 papers)
Citations (51)
