Prior selection for the precision parameter of Dirichlet Process Mixtures (2502.00864v2)
Abstract: Consider a Dirichlet process mixture model (DPM) with random precision parameter $\alpha$, inducing $K_n$ clusters over $n$ observations through its latent random partition. Our goal is to specify the prior distribution $p\left(\alpha\mid\boldsymbol{\eta}\right)$, including its fixed parameter vector $\boldsymbol{\eta}$, in a meaningful way. Existing approaches can be broadly categorised into three groups. Those in the first group depend on the sample size $n$ and often rely on the linkage between $p\left(\alpha\mid\boldsymbol{\eta}\right)$ and $p\left(K_n\right)$ to draw conclusions on how best to choose $\boldsymbol{\eta}$ to reflect one's prior knowledge of $K_{n}$; we call them sample-size-dependent. Those in the second and third groups instead use quasi-degenerate or improper priors, respectively. In this article, we show how all three methods have limitations, especially for large $n$. We then propose an alternative methodology that depends neither on $K_n$ nor on the size of the available sample, but rather on the relationship between the largest stick lengths in the stick-breaking construction of the DPM, and that reflects those prior beliefs in $p\left(\alpha\mid\boldsymbol{\eta}\right)$. We conclude with an example where existing sample-size-dependent approaches fail, while our sample-size-independent approach remains feasible.
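As a minimal sketch of the stick-breaking construction the abstract refers to, the simulation below draws Dirichlet process weights $w_k = V_k \prod_{j<k}\left(1 - V_j\right)$ with $V_k \sim \mathrm{Beta}\left(1, \alpha\right)$ and reports the average sizes of the two largest sticks for a few values of $\alpha$. This is only an illustration of how the precision parameter governs the largest stick lengths; the function name, truncation level, and grid of $\alpha$ values are hypothetical choices, not taken from the paper.

```python
import numpy as np

def stick_breaking_weights(alpha, n_sticks, rng):
    """Draw the first n_sticks weights of a Dirichlet process via
    stick breaking: V_k ~ Beta(1, alpha), w_k = V_k * prod_{j<k}(1 - V_j).
    (Truncation at n_sticks is an illustrative approximation.)"""
    v = rng.beta(1.0, alpha, size=n_sticks)
    # Fraction of the stick remaining before each break.
    leftover = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * leftover

rng = np.random.default_rng(0)
for alpha in (0.5, 2.0, 10.0):
    # Monte Carlo estimate of the expected two largest stick lengths.
    top2 = np.array([
        np.sort(stick_breaking_weights(alpha, 200, rng))[::-1][:2]
        for _ in range(2000)
    ])
    print(f"alpha={alpha:5.1f}  mean largest stick ~ {top2[:, 0].mean():.3f}  "
          f"mean second largest ~ {top2[:, 1].mean():.3f}")
```

Running this shows the pattern the paper's methodology exploits: small $\alpha$ concentrates mass on a few large sticks (few clusters), while large $\alpha$ spreads mass over many small sticks, and none of this depends on a sample size $n$.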