
On predictive density estimation with additional information

Published 22 Sep 2017 in math.ST, stat.ME, and stat.TH (arXiv:1709.07778v1)

Abstract: Based on independently distributed $X_1 \sim N_p(\theta_1, \sigma^2_1 I_p)$ and $X_2 \sim N_p(\theta_2, \sigma^2_2 I_p)$, we consider the efficiency of various predictive density estimators for $Y_1 \sim N_p(\theta_1, \sigma^2_Y I_p)$, with the additional information $\theta_1 - \theta_2 \in A$ and known $\sigma^2_1, \sigma^2_2, \sigma^2_Y$. We provide improvements on benchmark predictive densities such as the plug-in, maximum likelihood, and minimum risk equivariant predictive densities. Dominance results are obtained for $\alpha$-divergence losses and include Bayesian improvements for reverse Kullback-Leibler loss, and for Kullback-Leibler (KL) loss in the univariate case ($p=1$). An ensemble of techniques is exploited, including variance expansion (for KL loss), point estimation duality, and concave inequalities. Representations for Bayesian predictive densities, and in particular for $\hat{q}_{\pi_{U,A}}$ associated with a uniform prior for $\theta = (\theta_1, \theta_2)$ truncated to $\{\theta \in \mathbb{R}^{2p} : \theta_1 - \theta_2 \in A\}$, are established and used for the Bayesian dominance findings. Finally and interestingly, these Bayesian predictive densities also relate to skew-normal distributions, as well as to new forms of such distributions.
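As a hedged illustration of the variance-expansion idea the abstract mentions for KL loss, the sketch below (variable names and the specific inflation family are illustrative assumptions, not the paper's notation) compares the frequentist KL risk of the plug-in density $N_p(X_1, \sigma^2_Y I_p)$ with an inflated version $N_p(X_1, c\,\sigma^2_Y I_p)$, using the closed-form KL divergence between isotropic Gaussians.

```python
import math

def kl_risk(c, p, var1, var_y):
    """KL risk of the predictive density N_p(X1, c * var_y * I_p)
    for Y1 ~ N_p(theta1, var_y * I_p), averaged over X1 ~ N_p(theta1, var1 * I_p).

    KL(N(theta1, var_y I) || N(X1, c var_y I))
        = (p/2) * (log c + 1/c - 1) + ||X1 - theta1||^2 / (2 c var_y),
    and E||X1 - theta1||^2 = p * var1, giving the closed form below.
    """
    return 0.5 * p * (math.log(c) + 1.0 / c - 1.0) + p * var1 / (2.0 * c * var_y)

p, var1, var_y = 3, 1.0, 1.0
# Within this inflation family, risk(c) is minimized at c = 1 + var1/var_y
# (set the derivative in c to zero), which corresponds to the minimum risk
# equivariant predictive density N_p(X1, (var1 + var_y) I_p).
c_star = 1.0 + var1 / var_y

print("plug-in risk (c = 1):   ", kl_risk(1.0, p, var1, var_y))
print("expanded risk (c = c*): ", kl_risk(c_star, p, var1, var_y))
```

Running this shows the variance-expanded density attains strictly smaller KL risk than the plug-in, which is the baseline phenomenon the paper's dominance results refine under the restriction $\theta_1 - \theta_2 \in A$.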
