Adaptive Gaussian Mixture MH
- Fully Adaptive Gaussian Mixture MH (AGM-MH) is an adaptive MCMC method that recursively updates a mixture of Gaussian proposals to approximate complex target distributions.
- It dynamically adjusts weights, means, and covariances using the full sample history, lowering autocorrelation and enhancing mixing for multi-modal and high-dimensional targets.
- Empirical results show that AGM-MH achieves lower estimation error and faster convergence compared to traditional nonadaptive Metropolis–Hastings methods, forming a basis for advanced variants like AIMM.
Fully Adaptive Gaussian Mixture Metropolis–Hastings (AGM-MH) is a class of independent Metropolis–Hastings algorithms employing a proposal distribution modeled as a mixture of Gaussian components. Its key innovation is the simultaneous, recursive adaptation of all mixture parameters (weights, means, and covariances) using the entire sample history, with the explicit goal of improving efficiency for multi-modal and high-dimensional target distributions. The proposal is dynamically refined to progressively approximate the target density, thus lowering autocorrelation and enhancing mixing. AGM-MH is foundational in adaptive MCMC and serves as the underlying framework for algorithms such as Adaptive Incremental Mixture MCMC (AIMM) (Luengo et al., 2012, Maire et al., 2016).
1. Algorithmic Structure and Proposal Design
At each iteration $t$, the AGM-MH proposal is an $N$-component Gaussian mixture,
$$q_t(x) = \sum_{i=1}^{N} \omega_{i,t}\, \mathcal{N}\!\left(x \mid \mu_{i,t}, \Sigma_{i,t}\right),$$
where $\omega_{i,t}$, $\mu_{i,t}$, and $\Sigma_{i,t}$ are the adaptive weights, means, and covariance matrices, respectively. The proposal is independent of the current chain position, and each new sample $x' \sim q_t$ is accepted with probability
$$\alpha(x_{t-1}, x') = \min\left\{1,\ \frac{\pi(x')\, q_t(x_{t-1})}{\pi(x_{t-1})\, q_t(x')}\right\},$$
where $\pi$ is the (unnormalized) target density (Luengo et al., 2012).
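The independence-sampler step described above can be sketched in NumPy as follows (a minimal illustration; the helper names, seeding, and structure are mine, not from the papers):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixture_logpdf(x, weights, means, covs):
    # Log-density of the Gaussian mixture proposal q_t at point x.
    d = x.shape[-1]
    logs = []
    for w, m, S in zip(weights, means, covs):
        diff = x - m
        L = np.linalg.cholesky(S)
        z = np.linalg.solve(L, diff)
        logdet = 2.0 * np.sum(np.log(np.diag(L)))
        logs.append(np.log(w) - 0.5 * (z @ z + logdet + d * np.log(2 * np.pi)))
    return np.logaddexp.reduce(logs)

def mixture_sample(weights, means, covs):
    # Pick a component by weight, then draw from its Gaussian.
    i = rng.choice(len(weights), p=weights)
    return rng.multivariate_normal(means[i], covs[i])

def imh_step(x_curr, log_target, weights, means, covs):
    # One independent-MH step: the proposal ignores the current state,
    # so the acceptance ratio involves q at both the old and new points.
    x_prop = mixture_sample(weights, means, covs)
    log_alpha = (log_target(x_prop) - log_target(x_curr)
                 + mixture_logpdf(x_curr, weights, means, covs)
                 - mixture_logpdf(x_prop, weights, means, covs))
    if np.log(rng.uniform()) < log_alpha:
        return x_prop, True
    return x_curr, False
```

Because the proposal is independent of the chain state, a proposal that closely matches $\pi$ yields near-unit acceptance, which is exactly what the adaptation aims for.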
In the AIMM framework, the proposal generalizes dynamically:
$$\tilde{q}_t(x) \propto q_0(x) + \sum_{k=1}^{M_t} \tilde{\omega}_{k}\, \mathcal{N}\!\left(x \mid \mu_k, \Sigma_k\right),$$
with $q_0$ a defensive Gaussian or broad prior component, $\mathcal{N}(x \mid \mu_k, \Sigma_k)$ Gaussian increments, and both the number of components ($M_t$) and their parameters determined online (Maire et al., 2016).
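A hedged sketch of such an incremental addition step follows; the discrepancy measure, threshold `tau`, exponent `eta`, and neighborhood size here are illustrative assumptions, not the papers' exact choices:

```python
import numpy as np

def maybe_add_component(x_prop, log_target, mixture_logq, history, tau=1.0, eta=0.5):
    # Local discrepancy between target and current proposal at x_prop;
    # a large value signals poor coverage and triggers a new component.
    discrepancy = log_target(x_prop) - mixture_logq(x_prop)
    if discrepancy <= tau:
        return None
    # Estimate the new covariance from the nearest past samples
    # (Euclidean neighborhood here for simplicity; AIMM uses a
    # Mahalanobis neighborhood).
    hist = np.asarray(history)
    d = hist.shape[1]
    nearest = hist[np.argsort(np.linalg.norm(hist - x_prop, axis=1))[:max(2 * d, 10)]]
    cov = np.cov(nearest, rowvar=False) + 1e-6 * np.eye(d)
    # Unnormalized weight from the target value at the new mean.
    weight_unnorm = np.exp(eta * log_target(x_prop))
    return x_prop.copy(), cov, weight_unnorm
```

The returned triple (mean, covariance, unnormalized weight) would then be appended to the mixture and the weights renormalized.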
2. Parameter Adaptation and Recursive Formulas
Parameter updates in AGM-MH are performed recursively using accepted samples:
- At each step, assign the newly accepted sample $x_t$ to the closest component $j$ (e.g., in Mahalanobis distance).
- Update the mean:
$$\mu_{j,t} = \mu_{j,t-1} + \frac{1}{n_{j,t}}\left(x_t - \mu_{j,t-1}\right),$$
where $n_{j,t}$ is the number of samples assigned to component $j$.
- Update the covariance:
$$\Sigma_{j,t} = \frac{n_{j,t}-1}{n_{j,t}}\,\Sigma_{j,t-1} + \frac{n_{j,t}-1}{n_{j,t}^2}\left(x_t - \mu_{j,t-1}\right)\left(x_t - \mu_{j,t-1}\right)^{\top} + \rho I,$$
with a ridge parameter $\rho > 0$ ensuring positive definiteness.
- Update weights:
$$\omega_{i,t} = \frac{n_{i,t}}{\sum_{k=1}^{N} n_{k,t}}, \qquad i = 1, \dots, N.$$
For AIMM, a new Gaussian component is added when the local discrepancy between target and proposal exceeds a threshold $\tau$. The new component's mean is the current proposal $x'$, its covariance is estimated from the neighborhood of $x'$ (using Mahalanobis distance), and its unnormalized weight is set from the target value there, $\tilde{\omega} \propto \pi(x')^{\eta}$ with $\eta \in (0, 1]$ (Luengo et al., 2012, Maire et al., 2016).
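The recursive updates above can be sketched as follows (standard Welford-style recursions with a ridge term; the papers' exact recursion may weight the history differently):

```python
import numpy as np

def adapt_component(x, j, counts, means, covs, weights, ridge=1e-6):
    # One recursive update after an accepted sample x is assigned to
    # component j. Mutates the supplied containers in place.
    counts[j] += 1
    n = counts[j]
    delta = x - means[j]
    means[j] = means[j] + delta / n  # recursive mean update
    # Welford-style recursive covariance, plus a ridge term to keep
    # the matrix strictly positive definite.
    covs[j] = ((n - 1) / n) * covs[j] \
        + ((n - 1) / n**2) * np.outer(delta, delta) \
        + ridge * np.eye(len(x))
    weights[:] = counts / counts.sum()  # weights proportional to counts
```

Each update costs only a rank-one modification of one component's parameters, which is what makes the full-history adaptation cheap per iteration.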
3. Initialization and Practical Guidelines
Effective performance requires choices for the number of mixture components, their initialization, adaptation timescales, and regularization:
- Component number $N$: For fixed-$N$ AGM-MH, set a priori, often proportional to the anticipated number of modes; in AIMM, the count grows adaptively as components are added.
- Initial means $\mu_{i,0}$: Distributed around expected modes if prior information is available; otherwise, scattered randomly over a large support.
- Initial covariances $\Sigma_{i,0}$: Typically $\sigma_0^2 I$ with large $\sigma_0^2$ to ensure global exploration.
- Training length: Sufficiently long that each component accrues samples before adaptation stabilizes.
- Stopping time: Either the total budget or earlier, with vanishing adaptation guaranteeing ergodicity.
- Ridge parameter $\rho$: A small positive value to prevent degeneracy.
- AIMM-specific tuning: Discrepancy threshold $\tau$, neighborhood scale, and unnormalized-weight exponent $\eta$ (Luengo et al., 2012, Maire et al., 2016).
Unused components' weights tend to zero ($\omega_{i,t} \to 0$), and such components may be pruned to reduce computational cost in fixed-$N$ settings.
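Putting the guidelines together, an illustrative initializer might look like this (all names and default values are assumptions for demonstration, not recommendations from the papers):

```python
import numpy as np

def init_agm_mh(d, N, support_radius=10.0, sigma0=5.0, seed=0):
    # Scatter initial means over a large support and give every
    # component a broad covariance for global exploration.
    rng = np.random.default_rng(seed)
    means = [rng.uniform(-support_radius, support_radius, size=d) for _ in range(N)]
    covs = [sigma0**2 * np.eye(d) for _ in range(N)]
    weights = np.full(N, 1.0 / N)  # uniform initial weights
    counts = np.zeros(N)           # per-component assignment counts
    return weights, means, covs, counts
```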
4. Convergence and Ergodicity
AGM-MH, in both its fixed and incremental forms, is designed to be ergodic with respect to the target $\pi$. The adaptation mechanism satisfies the "diminishing adaptation" criterion: updates scale as $O(1/n_{j,t})$, and the Law of Large Numbers ensures they vanish over time. Containment is enforced by the ridge parameter $\rho$ in the covariance matrices, which keeps every $\Sigma_{i,t}$ strictly positive definite. Standard results (Roberts & Rosenthal, 2007) then guarantee ergodicity provided adaptation vanishes and the proposal remains well behaved.
AIMM generalizes this result, with explicit theorems for both unbounded and compact parameter spaces. Under conditions such as lower bounds on covariance determinants, subexponential tails for the target $\pi$, and an upper bound on the number of components in compact spaces, both diminishing adaptation and containment hold, ensuring convergence in total variation to the target (Luengo et al., 2012, Maire et al., 2016).
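Stated explicitly, the diminishing-adaptation condition of Roberts & Rosenthal (2007) invoked here is:

```latex
% The transition kernels P_{\Gamma_t}, indexed by the adapted mixture
% parameters \Gamma_t, must change less and less as t grows:
\lim_{t \to \infty}\; \sup_{x}\;
  \bigl\| P_{\Gamma_{t+1}}(x, \cdot) - P_{\Gamma_t}(x, \cdot) \bigr\|_{\mathrm{TV}}
  = 0 \quad \text{in probability},
% which holds for AGM-MH because each parameter update is O(1/n_{j,t}).
```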
5. Computational Complexity
The per-iteration cost for fixed-$N$ AGM-MH in dimension $d$ is $O(N d^2)$, dominated by:
- Sampling: $O(d^2)$ to generate a mixture sample (given a cached Cholesky factor).
- Evaluation: $O(N d^2)$ per mixture-density evaluation.
- Component search: $O(N d^2)$ to find the closest mean under Mahalanobis distance.
- Covariance update: $O(d^2)$ per rank-one update (Luengo et al., 2012).
Empirically, as adaptation proceeds, many component weights $\omega_{i,t}$ decay toward zero, and pruning or "moving window" techniques (especially in AIMM, via f-AIMM) can further limit computational overhead (Maire et al., 2016).
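A simple pruning pass of the kind described can be sketched as follows (a fixed-$N$ heuristic; the `tol` threshold is an assumption, and this is not the f-AIMM moving window):

```python
import numpy as np

def prune_components(weights, means, covs, counts, tol=1e-3):
    # Drop components whose adapted weight has decayed below tol,
    # then renormalize the remaining weights.
    keep = weights > tol
    pruned_weights = weights[keep]
    pruned_weights = pruned_weights / pruned_weights.sum()
    pruned_means = [m for m, k in zip(means, keep) if k]
    pruned_covs = [c for c, k in zip(covs, keep) if k]
    return pruned_weights, pruned_means, pruned_covs, counts[keep]
```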
6. Numerical Performance and Comparative Results
Empirical evaluation demonstrates that AGM-MH achieves substantially lower sample autocorrelation and more accurate estimation relative to nonadaptive Metropolis–Hastings using the same initial proposal, with only mild extra computational cost. Representative findings include:
- One-dimensional bimodal target: AGM-MH attains substantially lower autocorrelation and lower mean-square error (MSE) for the posterior mean than nonadaptive MH, and the final component means, variances, and weights closely match those of the true mixture.
- One-dimensional multi-component mixtures: MSEs for normalizing-constant estimation decay as the number of iterations grows; AGM-MH autocorrelations of $0.13$–$0.16$ vs. $0.46$–$0.81$ for nonadaptive MH; acceptance rates improve markedly after adaptation.
- Two-dimensional mixtures: When the number of components matches the number of modes, parameters converge quickly to the true modes and covariances; with more components than modes, only those near modes adapt, and the unused components become inactive.
AIMM and its variant f-AIMM demonstrate competitive or superior performance to adaptive random-walk Metropolis and fixed-mixture AGM-MH for high-dimensional, multimodal, or heavy-tailed targets, with additional algorithmic flexibility in tuning adaptation rates and controlling proposal complexity (Luengo et al., 2012, Maire et al., 2016).
7. Extensions and Theoretical Variants
AGM-MH constitutes a foundational class upon which algorithms such as AIMM are constructed. In AIMM, the number of mixture components is not fixed but augments adaptively in response to local coverage deficiencies, as signaled by large local discrepancy. Efficient local covariance estimation and online weight updates ensure the proposal remains flexible and can concentrate on relevant regions of the target.
Theoretical guarantees extend to unbounded or compact state spaces under routine conditions. Heuristic and empirical strategies, such as capping the number of components ("moving window"), adapting discrepancy thresholds, and rescaling weights, are documented for practical efficiency. These approaches also keep the computational cost growing only roughly linearly as components accumulate (Maire et al., 2016).
For detailed algorithms, theoretical proofs, and experimental protocols, see (Luengo et al., 2012) ("Fully Adaptive Gaussian Mixture Metropolis-Hastings Algorithm") and (Maire et al., 2016) ("Adaptive Incremental Mixture Markov chain Monte Carlo").