
Hypothesis Testing for Parsimonious Gaussian Mixture Models (1405.0377v1)

Published 2 May 2014 in stat.ME and stat.CO

Abstract: Gaussian mixture models with eigen-decomposed covariance structures make up the most popular family of mixture models for clustering and classification, i.e., the Gaussian parsimonious clustering models (GPCM). Although the GPCM family has been used for almost 20 years, selecting the best member of the family in a given situation remains a troublesome problem. Likelihood ratio tests are developed to tackle this problem. These likelihood ratio tests use the heteroscedastic model under the alternative hypothesis but provide much more flexibility and real-world applicability than previous approaches that compare the homoscedastic Gaussian mixture with the heteroscedastic one. Along the way, a novel maximum likelihood estimation procedure is developed for two members of the GPCM family. Simulations show that the $\chi^2$ reference distribution gives a reasonable approximation for the LR statistics only when the sample size is considerable and when the mixture components are well separated; accordingly, following Lo (2008), a parametric bootstrap is adopted. Furthermore, by generalizing the idea of Greselin and Punzo (2013) to the clustering context, a closed testing procedure, with the defined likelihood ratio tests as local tests, is introduced to assess a unique model in the general family. The advantages of this likelihood ratio testing procedure are illustrated via an application to the well-known Iris data set.
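For context, the eigen-decomposed covariance structure underlying the GPCM family is conventionally written as below; this is the standard Celeux–Govaert-style parameterization rather than notation quoted from the paper itself.

```latex
\[
  \Sigma_g \;=\; \lambda_g \, \Gamma_g \, A_g \, \Gamma_g^{\top},
  \qquad g = 1, \dots, G,
\]
% \lambda_g : scalar controlling the volume of component g,
% A_g      : diagonal matrix of scaled eigenvalues (|A_g| = 1) controlling shape,
% \Gamma_g : orthogonal matrix of eigenvectors controlling orientation.
% Constraining each factor to be equal across components, or to the identity,
% generates the parsimonious members of the GPCM family.
```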

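As a rough illustration of the bootstrap-calibrated likelihood ratio test described in the abstract, the sketch below tests a homoscedastic (tied-covariance, "EEE"-like) Gaussian mixture against the heteroscedastic full-covariance ("VVV") alternative, using scikit-learn's GaussianMixture and a parametric bootstrap to approximate the null distribution. This is a minimal sketch of one local test, not the authors' procedure (which covers all GPCM members); the function name bootstrap_lrt, the number of bootstrap replicates, and all fitting settings are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def bootstrap_lrt(X, n_components, n_boot=200, seed=0):
    """Parametric bootstrap LRT of a homoscedastic (tied-covariance) mixture
    against the heteroscedastic (full-covariance) alternative.

    Illustrative sketch only: the paper's procedure covers every GPCM member,
    not just this single homoscedastic-vs-heteroscedastic comparison.
    """
    n = X.shape[0]

    def fit(data, cov_type, rs):
        gm = GaussianMixture(n_components=n_components,
                             covariance_type=cov_type,
                             n_init=5, random_state=rs)
        return gm.fit(data)

    null_model = fit(X, "tied", seed)   # H0: common covariance matrix
    alt_model = fit(X, "full", seed)    # H1: unrestricted covariances

    # Observed LR statistic: twice the gap in total log-likelihood
    # (GaussianMixture.score returns the mean per-sample log-likelihood).
    lr_obs = 2.0 * n * (alt_model.score(X) - null_model.score(X))

    # Approximate the null distribution by simulating from the fitted H0 model
    # and refitting both models on each bootstrap sample (Lo 2008-style calibration).
    lr_boot = np.empty(n_boot)
    for b in range(n_boot):
        Xb, _ = null_model.sample(n)
        lr_boot[b] = 2.0 * n * (fit(Xb, "full", b).score(Xb)
                                - fit(Xb, "tied", b).score(Xb))

    p_value = (1 + np.sum(lr_boot >= lr_obs)) / (n_boot + 1)
    return lr_obs, p_value
```

For example, on the Iris measurements used in the paper's application, one could call `bootstrap_lrt(load_iris().data, n_components=3)` after `from sklearn.datasets import load_iris`; a small p-value for this local test would favour the heteroscedastic alternative.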