
Global rates of convergence in log-concave density estimation (1404.2298v2)

Published 8 Apr 2014 in math.ST and stat.TH

Abstract: The estimation of a log-concave density on $\mathbb{R}^d$ represents a central problem in the area of nonparametric inference under shape constraints. In this paper, we study the performance of log-concave density estimators with respect to global loss functions, and adopt a minimax approach. We first show that no statistical procedure based on a sample of size $n$ can estimate a log-concave density with respect to the squared Hellinger loss function with supremum risk smaller than order $n^{-4/5}$, when $d=1$, and order $n^{-2/(d+1)}$ when $d \geq 2$. In particular, this reveals a sense in which, when $d \geq 3$, log-concave density estimation is fundamentally more challenging than the estimation of a density with two bounded derivatives (a problem to which it has been compared). Second, we show that for $d \leq 3$, the Hellinger $\epsilon$-bracketing entropy of a class of log-concave densities with small mean and covariance matrix close to the identity grows like $\max\{\epsilon^{-d/2},\epsilon^{-(d-1)}\}$ (up to a logarithmic factor when $d=2$). This enables us to prove that when $d \leq 3$ the log-concave maximum likelihood estimator achieves the minimax optimal rate (up to logarithmic factors when $d = 2,3$) with respect to squared Hellinger loss.
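The abstract's comparison can be made concrete by tabulating the rate exponents. A minimal sketch: the log-concave lower bound from the abstract ($n^{-4/5}$ for $d=1$, $n^{-2/(d+1)}$ for $d \geq 2$) set against the standard minimax rate $n^{-4/(d+4)}$ for densities with two bounded derivatives (the latter rate is a well-known benchmark, stated here only for comparison, not taken from this abstract).

```python
# Compare minimax rate exponents: squared Hellinger risk decays like n^{-exponent}.

def logconcave_exponent(d):
    # Lower bound from the paper: 4/5 when d = 1, 2/(d+1) when d >= 2.
    return 4 / 5 if d == 1 else 2 / (d + 1)

def smooth2_exponent(d):
    # Standard rate for densities with two bounded derivatives: 4/(d+4).
    return 4 / (d + 4)

for d in range(1, 6):
    lc, sm = logconcave_exponent(d), smooth2_exponent(d)
    verdict = "log-concave harder" if lc < sm else "rates coincide"
    print(f"d={d}: log-concave n^-{lc:.3f} vs smooth n^-{sm:.3f} ({verdict})")
```

The exponents coincide for $d = 1, 2$ and diverge from $d = 3$ onward (e.g. $2/4 = 0.5 < 4/7 \approx 0.571$), illustrating the sense in which log-concave estimation becomes fundamentally harder in dimension three and above.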
