Adaptive Posterior Convergence Rates in Bayesian Density Deconvolution with Supersmooth Errors (1308.5427v2)
Abstract: Bayesian density deconvolution using nonparametric prior distributions is a useful alternative to frequentist kernel-based deconvolution estimators owing to its potentially wide range of applicability, straightforward uncertainty quantification, and generalizability to more sophisticated models. This article is the first substantive effort to theoretically quantify the behavior of the posterior in this recent line of research. In particular, assuming a known supersmooth error density, a Dirichlet process mixture of Normals on the true density leads to a posterior convergence rate equal to the minimax rate $(\log n)^{-\eta/\beta}$ adaptively over the smoothness $\eta$ of an appropriate H\"{o}lder space of densities, where $\beta$ is the degree of smoothness of the error distribution. Our main contribution is achieving adaptive minimax rates with respect to the $L_p$ norm for $2 \leq p \leq \infty$ under mild regularity conditions on the true density. En route, we develop tight concentration bounds for a class of kernel-based deconvolution estimators, which may be of independent interest.
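To fix ideas, the following is a minimal sketch (not the authors' Bayesian procedure) of the classical Fourier-inversion deconvolution kernel estimator, the frequentist class to which the abstract's concentration bounds pertain. The setup assumes $Y = X + \varepsilon$ with a known supersmooth Gaussian error; the function name `deconvolution_kde` and parameters such as `sigma_eps` are illustrative choices, not from the paper.

```python
import numpy as np

def deconvolution_kde(y, x_grid, h, sigma_eps):
    """Fourier-inversion deconvolution kernel estimate of the density of X.

    y         : observed data Y_j = X_j + eps_j (illustrative setup)
    x_grid    : points at which to evaluate the estimate
    h         : bandwidth; for supersmooth errors the optimal bandwidth
                shrinks only logarithmically in n, matching the
                (log n)^{-eta/beta} rate in the abstract
    sigma_eps : known standard deviation of the Gaussian error
    """
    # Sinc-type kernel whose Fourier transform is 1{|t| <= 1}, so the
    # inversion integral is supported on |t| <= 1/h.
    t = np.linspace(-1.0 / h, 1.0 / h, 2001)
    dt = t[1] - t[0]
    # Empirical characteristic function of the observations
    ecf = np.exp(1j * np.outer(t, y)).mean(axis=1)
    # Characteristic function of the N(0, sigma_eps^2) error
    cf_eps = np.exp(-0.5 * (sigma_eps * t) ** 2)
    # f_hat(x) = (1/2pi) \int e^{-itx} ecf(t) / cf_eps(t) dt
    integrand = (ecf / cf_eps)[:, None] * np.exp(-1j * np.outer(t, x_grid))
    f_hat = np.real(integrand.sum(axis=0) * dt) / (2 * np.pi)
    return np.clip(f_hat, 0.0, None)  # truncate small negative wiggles

# Toy usage: X from a two-component Normal mixture, eps ~ N(0, 0.3^2)
rng = np.random.default_rng(0)
n = 2000
x_true = np.where(rng.random(n) < 0.5,
                  rng.normal(-1.0, 0.4, n), rng.normal(1.0, 0.4, n))
y_obs = x_true + rng.normal(0.0, 0.3, n)
grid = np.linspace(-3.0, 3.0, 121)
f_est = deconvolution_kde(y_obs, grid, h=0.4, sigma_eps=0.3)
```

Division by the error characteristic function `cf_eps` is what makes the problem ill-posed: for Gaussian ($\beta = 2$) errors it decays like $e^{-\sigma^2 t^2/2}$, which forces the slow logarithmic rates discussed above.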