
Functional estimation in log-concave location families (2108.00263v1)

Published 31 Jul 2021 in math.ST and stat.TH

Abstract: Let $\{P_{\theta}:\theta \in \mathbb{R}^d\}$ be a log-concave location family with $P_{\theta}(dx)=e^{-V(x-\theta)}\,dx$, where $V:\mathbb{R}^d\mapsto \mathbb{R}$ is a known convex function, and let $X_1,\dots, X_n$ be i.i.d. random variables sampled from distribution $P_{\theta}$ with an unknown location parameter $\theta$. The goal is to estimate the value $f(\theta)$ of a smooth functional $f:\mathbb{R}^d\mapsto \mathbb{R}$ based on observations $X_1,\dots, X_n$. In the case when $V$ is sufficiently smooth and $f$ is a functional from a ball in a Hölder space $C^s$, we develop estimators of $f(\theta)$ with minimax optimal error rates measured by the $L_2(\mathbb{P}_{\theta})$-distance as well as by more general Orlicz norm distances. Moreover, we show that if $d\leq n^{\alpha}$ and $s>\frac{1}{1-\alpha}$, then the resulting estimators are asymptotically efficient in the Hájek-LeCam sense with the convergence rate $\sqrt{n}$. This generalizes earlier results on estimation of smooth functionals in Gaussian shift models. The estimators have the form $f_k(\hat\theta)$, where $\hat\theta$ is the maximum likelihood estimator and $f_k:\mathbb{R}^d\mapsto \mathbb{R}$ (with $k$ depending on $s$) are functionals defined in terms of $f$ and designed to provide a higher order bias reduction in the functional estimation problem. The method of bias reduction is based on iterative parametric bootstrap, and it has been used successfully before in the case of Gaussian models.
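The parametric-bootstrap bias reduction behind the estimators $f_k(\hat\theta)$ can be illustrated at its first iteration ($k=1$): writing $(Tf)(\theta)=\mathbb{E}_{\theta} f(\hat\theta)$, a single correction step replaces the plug-in $f(\hat\theta)$ with $2f(\hat\theta)-(Tf)(\hat\theta)$, where $Tf$ is approximated by simulating from the fitted model. Below is a minimal sketch of this one-step correction, not the paper's exact construction, for the Gaussian location family $V(z)=z^2/2$ in dimension $d=1$ (where the MLE is the sample mean); all function names and the test functional $f(\theta)=\theta^2$ are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def mle(x):
    # For the standard Gaussian location family, the MLE is the sample mean.
    return x.mean()

def bootstrap_expectation(f, theta, n, n_boot=5000):
    # Monte Carlo approximation of (T f)(theta) = E_theta[f(theta_hat)]:
    # simulate n_boot fresh samples of size n from P_theta and average f(MLE).
    sims = rng.normal(loc=theta, scale=1.0, size=(n_boot, n))
    return np.mean([f(mle(row)) for row in sims])

def bias_reduced(f, x, n_boot=5000):
    # One bootstrap iteration (k = 1): f_1(theta_hat) = 2 f(theta_hat) - (T f)(theta_hat).
    theta_hat = mle(x)
    return 2.0 * f(theta_hat) - bootstrap_expectation(f, theta_hat, len(x), n_boot)

# Illustration: f(theta) = theta^2 satisfies E_theta[f(theta_hat)] = theta^2 + 1/n,
# so the plug-in estimator carries a 1/n bias that the correction removes.
n, theta_true = 50, 1.0
x = rng.normal(loc=theta_true, scale=1.0, size=n)
f = lambda t: t ** 2
plug_in = f(mle(x))
corrected = bias_reduced(f, x)
```

Higher-order corrections iterate the same operator: $f_k=\sum_{j=0}^{k}(-1)^j B^j f$ with $B=T-I$, which in simulation requires nested bootstrap resampling and grows expensive quickly; the point of the sketch is only the first-order mechanism.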
