MCMC for Imbalanced Categorical Data (1605.05798v2)

Published 19 May 2016 in math.ST, cs.CC, stat.CO, and stat.TH

Abstract: Many modern applications collect highly imbalanced categorical data, with some categories relatively rare. Bayesian hierarchical models combat data sparsity by borrowing information, while also quantifying uncertainty. However, posterior computation presents a fundamental barrier to routine use; a single class of algorithms does not work well in all settings and practitioners waste time trying different types of MCMC approaches. This article was motivated by an application to quantitative advertising in which we encountered extremely poor computational performance for common data augmentation MCMC algorithms but obtained excellent performance for adaptive Metropolis. To obtain a deeper understanding of this behavior, we give strong theoretical results on computational complexity in an infinitely imbalanced asymptotic regime. Our results show that the computational complexity of Metropolis is logarithmic in sample size, while data augmentation is polynomial in sample size. The root cause of the poor performance of data augmentation is a discrepancy between the rates at which the target density and MCMC step sizes concentrate. In general, MCMC algorithms that have a similar discrepancy will fail in large samples, a result with substantial practical impact.
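To make the setting concrete, the sketch below is a minimal illustration (not code from the paper) of an adaptive random-walk Metropolis sampler on the posterior of an intercept-only logistic model with heavily imbalanced binary outcomes. The model, prior, sample sizes, and Robbins-Monro adaptation rule are all illustrative assumptions chosen to mimic the paper's regime of few "successes" among many observations.

```python
# A minimal sketch (illustrative assumptions, not the paper's code):
# adaptive Metropolis for an intercept-only logistic model with
# highly imbalanced binary data.
import numpy as np

rng = np.random.default_rng(0)

# Imbalanced data: n observations, very few "successes" (assumed counts).
n, n_ones = 100_000, 20

def log_post(beta):
    # Log-likelihood of intercept-only logistic regression,
    # sum_i [y_i * beta - log(1 + exp(beta))], plus a N(0, 10^2) prior.
    return n_ones * beta - n * np.logaddexp(0.0, beta) - beta**2 / 200.0

def adaptive_metropolis(n_iter=20_000, target_accept=0.4):
    beta, lp = 0.0, log_post(0.0)
    log_step = 0.0                     # log proposal std dev, adapted online
    draws = np.empty(n_iter)
    for t in range(n_iter):
        prop = beta + np.exp(log_step) * rng.standard_normal()
        lp_prop = log_post(prop)
        accept = np.log(rng.uniform()) < lp_prop - lp
        if accept:
            beta, lp = prop, lp_prop
        # Robbins-Monro update: shrink or grow the proposal scale toward
        # the target acceptance rate as the chain progresses.
        log_step += (accept - target_accept) / (t + 1) ** 0.6
        draws[t] = beta
    return draws

draws = adaptive_metropolis()
print("posterior mean of intercept:", draws[5_000:].mean())
```

In this sketch the adapted proposal scale shrinks to match the scale on which the posterior concentrates; the abstract's argument is that data augmentation samplers lack this matching property in the infinitely imbalanced regime, which is why their complexity grows polynomially rather than logarithmically in sample size.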

Citations (11)
