Latin Hypercube Sampling
- Latin Hypercube Sampling is a stratified sampling method that partitions each variable into equal-probability intervals to ensure complete coverage and reduce variance.
- The algorithm uses random permutations and uniform jitter in each dimension, effectively capturing the full range of univariate marginals and aiding effective numerical integration.
- Extensions such as copula-based, constrained, and adaptive LHS methods enhance its applicability to dependent inputs, non-rectangular domains, and high-dimensional simulation challenges.
Latin Hypercube Sampling (LHS) is a stratified sampling design widely utilized for uncertainty quantification, numerical integration, simulation-based optimization, and experiment design in high-dimensional spaces. LHS strategically ensures that univariate marginals are perfectly stratified—each variable's range is partitioned into equal-probability intervals, so every interval is represented in the sample—while randomizing the joint multivariate allocation. This yields improved univariate coverage, variance reduction, and space-filling properties, especially valuable when computational budgets constrain the number of simulation runs.
1. Formal Definition and Algorithmic Construction
Consider samples $X^{(1)},\dots,X^{(N)}$ in $[0,1]^d$. Standard LHS is constructed as follows:
- Marginal stratification: For each dimension $j = 1,\dots,d$, subdivide $[0,1]$ into the $N$ equal-probability intervals $[(i-1)/N,\, i/N)$ for $i = 1,\dots,N$.
- Random permutation: For each $j$, draw a random permutation $\pi_j$ of $\{1,\dots,N\}$.
- Uniform jitter: For each $j$ and each $i$, draw $U_{ij} \sim \mathrm{Uniform}[0,1)$ independently.
- Sample placement: Set $X^{(i)}_j = \bigl(\pi_j(i) - 1 + U_{ij}\bigr)/N$.
This ensures that, for each coordinate $j$, the set $\{X^{(1)}_j,\dots,X^{(N)}_j\}$ contains exactly one point in each of the $N$ strata.
The defining property is that in every one-dimensional projection, all strata are hit exactly once, but the joint design is not stratified beyond the imposed marginal constraints; high-dimensional uniformity is attained only in expectation. Standard implementations exist in R (lhs), Python (pyDOE, expandLHS), and MATLAB (lhsdesign) (Kucherenko et al., 2015, Boschini et al., 29 Aug 2025).
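The construction above can be sketched in a few lines of NumPy (a minimal illustration, independent of the cited packages; function and variable names are our own):

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """Standard LHS on [0, 1]^d: one random permutation of strata per
    dimension, plus uniform jitter within each stratum of width 1/n."""
    rng = np.random.default_rng(rng)
    # One independent permutation of {0, ..., n-1} per dimension.
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])
    # Uniform jitter places each point inside its assigned stratum.
    jitter = rng.uniform(size=(n, d))
    return (perms + jitter) / n

X = latin_hypercube(8, 2, rng=0)
# Each marginal hits each of the 8 strata exactly once.
```

Non-uniform marginals are obtained by pushing each column through the corresponding inverse CDF.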
2. Variance Reduction and Central Limit Results
Given $f : [0,1]^d \to \mathbb{R}$ and the integration task $I = \int_{[0,1]^d} f(x)\,dx$, the LHS estimator is
$$\hat I_{\mathrm{LHS}} = \frac{1}{N} \sum_{i=1}^{N} f\bigl(X^{(i)}\bigr).$$
Variance analysis via the ANOVA/functional decomposition of $f$ yields (Kucherenko et al., 2015, Hakimi, 10 Feb 2025)
$$\operatorname{Var}\bigl(\hat I_{\mathrm{LHS}}\bigr) = \frac{1}{N} \sum_{|u| \ge 2} \sigma_u^2 + o(N^{-1}),$$
so the first-order (main-effect) variance components are removed, and under sufficient smoothness the error decays faster than the plain Monte Carlo rate $O(N^{-1/2})$ when $f$ is main-effect dominated. A central limit theorem for LHS guarantees, more generally for $M$-estimators $\hat\theta_N$, that
$$\sqrt{N}\,\bigl(\hat\theta_N - \theta_0\bigr) \xrightarrow{d} \mathcal{N}\!\left(0,\; J^{-1} \Sigma_{\mathrm{LHS}} J^{-\top}\right),$$
where $\Sigma_{\mathrm{LHS}}$ is the LHS asymptotic covariance of the estimating function and $J$ is its Jacobian at $\theta_0$. The variance reduction is directly traceable to perfect marginal stratification removing first-order (main-effect) variance; compared to the i.i.d. estimator, the LHS estimator's asymptotic variance is never larger, and the gain is substantial when main effects dominate (Hakimi, 10 Feb 2025).
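The removal of main-effect variance is easy to verify numerically; the sketch below compares MC and LHS estimator variances on a purely additive integrand (the test function, sample sizes, and replication count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(42)

def lhs(n, d):
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (perms + rng.uniform(size=(n, d))) / n

# Purely additive integrand: all variance sits in main effects,
# which LHS stratification removes almost entirely.
f = lambda x: np.sum(x**2, axis=1)

n, d, reps = 64, 5, 200
mc_var = np.var([f(rng.uniform(size=(n, d))).mean() for _ in range(reps)])
lhs_var = np.var([f(lhs(n, d)).mean() for _ in range(reps)])
print(mc_var, lhs_var)  # LHS estimator variance is far smaller
```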
3. Comparative Performance and Function Typology
Empirical convergence analysis across various analytic test functions yields key performance regimes (Kucherenko et al., 2015):
| Function Type | MC Exponent | QMC Exponent | LHS Exponent |
|---|---|---|---|
| Type A (low effective dimension) | 0.5 | up to 0.94 | 0.5 |
| Type B (low-order interactions) | 0.5 | up to 0.96 | 0.69–0.75 |
| Type C (high-order interactions) | 0.5 | 0.64–0.68 | 0.5 |
- Type B (low superposition dimension): LHS improves on MC and may outperform QMC for small $N$ due to alignment with low-order effects.
- Type A/C: LHS and MC are similar unless $N$ is very small; QMC is strictly superior asymptotically.
- Guideline: For functions with additive or low-order dominant structure and small-to-moderate sample sizes, LHS is advantageous; QMC is generally preferable for unknown typology and large $N$ (Kucherenko et al., 2015).
4. Generalizations and Design Extensions
4.1 Partially Stratified and Latinized Stratified Sampling
Partially Stratified Sampling (PSS) interpolates between full univariate LHS (variance reduction for main effects only) and full stratified sampling (SS, variance reduction for interactions) (Shields et al., 2015).
- PSS Construction: Partition the space into subspaces (blocks), perform SS in each, and combine via random pairing.
- Latinized Stratified Sampling (LSS): Assigns LHS structure within each block, leveraging orthogonal arrays for simultaneous marginal and low-order interaction stratification.
- Practical guidance: Use PSS or LSS when interactions are expected; group variables with strong Sobol’ indices for block stratification.
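The PSS construction can be sketched as follows (a minimal illustration under our own restrictions — square grids per block with $N = m^b$ for each block of dimension $b$ — not the exact algorithm of Shields et al.):

```python
import numpy as np

def pss(n, block_dims, rng=None):
    """Partially Stratified Sampling sketch: fully stratified sampling
    within each low-dimensional block of variables, random pairing of
    rows across blocks. Requires n = m**b for every block dimension b
    (an illustrative restriction giving a square grid per block)."""
    rng = np.random.default_rng(rng)
    cols = []
    for b in block_dims:
        m = round(n ** (1.0 / b))
        assert m**b == n, "n must equal m**b for each block"
        # Full stratified grid of m**b = n cells, one point per cell.
        grid = np.stack(
            np.meshgrid(*[np.arange(m)] * b, indexing="ij"), axis=-1
        ).reshape(n, b)
        pts = (grid + rng.uniform(size=(n, b))) / m
        rng.shuffle(pts)  # random pairing across blocks
        cols.append(pts)
    return np.hstack(cols)

# 4 input dimensions treated as two 2-D stratified blocks:
X = pss(16, block_dims=[2, 2], rng=1)
```

Grouping strongly interacting variables into the same block is what transfers the interaction-variance reduction of SS to the paired design.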
4.2 LHS with Dependent Inputs
Classical LHS assumes independent marginals; for dependent input structures, extensions include:
- Copula-based LHSD: Transforms i.i.d. samples via a specified copula to recover dependences; a central limit theorem applies under bounded variation and right-continuity (Aistleitner et al., 2013).
- Quantization-based LHS: Uses Voronoi vector quantization for empirical joint structures, preserving stratification without explicit copula knowledge; unbiasedness and variance reduction demonstrated in environmental models (Lambert et al., 2024).
- Both methods yield lower variance than MC if dependence structure is known or accurately quantized.
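A rank-based sketch of copula-based LHSD under a Gaussian-copula assumption (the correlation matrix and the helper name `lhsd` are illustrative; the cited work treats general copulas):

```python
import numpy as np

def lhsd(n, R, rng=None):
    """Copula-based LHSD sketch: draw i.i.d. samples from a Gaussian
    copula (R is an illustrative correlation matrix), then replace each
    coordinate by its rank-based stratified value. Ranks preserve the
    dependence ordering while restoring exact marginal stratification."""
    rng = np.random.default_rng(rng)
    Z = rng.standard_normal((n, R.shape[0])) @ np.linalg.cholesky(R).T
    ranks = Z.argsort(axis=0).argsort(axis=0)       # 0 .. n-1 per column
    return (ranks + rng.uniform(size=Z.shape)) / n  # dependent, stratified

R = np.array([[1.0, 0.7], [0.7, 1.0]])
V = lhsd(500, R, rng=3)
```

Each column of `V` still hits every stratum exactly once, while the joint ordering carries the imposed dependence.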
4.3 Constrained and Adaptive LHS
- Constraint handling: Standard LHS fails for simplex, mixture, or synthesis constraints. Sequential or divide-and-conquer LHS (e.g., CASTRO) constructs feasible points component-wise, partitioning high-dimensional problems and enforcing linear constraints, with LHSMDU for multidimensional uniformity (Schenk et al., 2024).
- Adaptive/Sequential LHS: Approaches such as Local Latin Hypercube Refinement (LoLHR) and sequential hierarchical stratification dynamically allocate LHS samples to regions of high importance, guided by surrogate models, sensitivity indices, or clustering (Bogoclu et al., 2021, Krumscheid et al., 2023).
4.4 Expansion and Replication
- “LHS in LHS” expansion: Incrementally grows an existing LHS by regridding and filling empty bins with new LHS samples, quantified by a “degree of LH-ness” (Boschini et al., 29 Aug 2025).
- Replicated LHS for Sobol’ indices: Replicated designs enable efficient reordering (“permutation trick”) for computing first-order and total sensitivity indices without additional simulations; averaging schemes reduce estimator variance (Damblin et al., 2021).
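The "LHS in LHS" idea of regridding and filling empty bins admits a simple doubling sketch (an illustrative scheme, not the exact algorithm of Boschini et al.): each coarse stratum splits into two refined strata, exactly one of which is already occupied, so the $n$ empty refined strata per dimension each receive one new point, randomly paired across dimensions.

```python
import numpy as np

def lhs(n, d, rng):
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (perms + rng.uniform(size=(n, d))) / n

def expand_lhs(X, rng=None):
    """Double an n-point LHS to 2n points: on the refined 2n-strata
    grid, each coarse stratum contributes exactly one occupied refined
    stratum, so the n empty refined strata per dimension each take one
    new point; the combined design is again a valid LHS."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    new = np.empty((n, d))
    for j in range(d):
        occupied = np.floor(X[:, j] * 2 * n).astype(int)
        empty = np.setdiff1d(np.arange(2 * n), occupied)  # exactly n strata
        levels = rng.permutation(empty)                   # random pairing
        new[:, j] = (levels + rng.uniform(size=n)) / (2 * n)
    return np.vstack([X, new])

rng = np.random.default_rng(7)
X2 = expand_lhs(lhs(10, 3, rng), rng=rng)  # 20-point design, still an LHS
```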
5. Space-Filling, Discrepancy, and Optimization Criteria
LHS is widely used as a "space-filling" design, but raw LHS can exhibit clustering or non-uniform low-dimensional subprojections in high dimension $d$. Commonly used optimization criteria include:
- Center -discrepancy or wrap-around discrepancy: Ensures uniform coverage in all low-dimensional projections, essential for robust screening and model calibration (Damblin et al., 2013).
- Maximin criterion: Maximizes the minimal pairwise Euclidean distance, promoting even spacing; effective for moderate $d$, but projection uniformity can deteriorate.
- Minimum Spanning Tree criteria: Maximizes mean edge length and minimizes variance, offering a balance between regularity and randomness.
- Enhanced Stochastic Evolutionary (ESE) and Simulated Annealing methods support efficient optimization of these criteria for large $N$ and $d$ (Damblin et al., 2013, Boschini et al., 29 Aug 2025).
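A simple exchange heuristic illustrates maximin optimization (swapping two levels within a single column preserves the LHS property; this is a plain hill-climb sketch, deliberately simpler than the ESE or annealing schemes cited above):

```python
import numpy as np

def _min_dist(X):
    # Minimal pairwise Euclidean distance of the design.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return D[np.triu_indices(len(X), 1)].min()

def maximin_lhs(n, d, iters=2000, rng=None):
    """Maximin exchange heuristic: a column-wise level swap is kept
    only when it increases the minimal pairwise distance."""
    rng = np.random.default_rng(rng)
    # Centered LHS: one point at the midpoint of each stratum.
    X = (np.column_stack([rng.permutation(n) for _ in range(d)]) + 0.5) / n
    best = _min_dist(X)
    for _ in range(iters):
        j = rng.integers(d)
        a, b = rng.choice(n, size=2, replace=False)
        X[[a, b], j] = X[[b, a], j]      # swap two levels in column j
        m = _min_dist(X)
        if m > best:
            best = m
        else:
            X[[a, b], j] = X[[b, a], j]  # revert the swap
    return X, best

X, best = maximin_lhs(20, 2, rng=0)
```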
6. Negative Dependence, Discrepancy Bounds, and High-Dimensional Coverage
LHS points exhibit $\gamma$-negative dependence, with a constant $\gamma$ that grows with the dimension $d$, rather than strict negative orthant dependence (Doerr et al., 2021). This property ensures favorable large-deviation bounds and star-discrepancy scaling of order $O(\sqrt{d/N})$, with exponentially small probability of exceeding this bound. Coverage of $t$-dimensional projections is determined solely by $N$, not $d$; empirically, a moderate number of trials suffices for high coverage of all $t$-dimensional strata even as the ambient dimension $d$ grows (Burrage et al., 2015). Orthogonal Sampling (OS) further enforces uniform sub-block coverage, matching or exceeding LHS in subspace uniformity.
7. Practical Guidance and Limitations
- Sample size selection: LHS exhibits variance-reduction and RMSE benefits at moderate $N$ when main effects or additive structure dominate; otherwise, quasi-Monte Carlo or fully stratified designs may outperform it for large $N$ or strong interactions.
- Constraint management: For non-rectangular domains, standard LHS is infeasible; use constrained/sequential or quantization-based LHS (Schenk et al., 2024, Lambert et al., 2024).
- Expanding designs: Incremental expansion while preserving stratification requires dedicated algorithms (Boschini et al., 29 Aug 2025).
- Optimization: For predictive screening, optimize LHS discrepancy for robustness of subprojections; for maximal distance, use maximin or MST-based criteria.
- High dimension: The negative-dependence constants, which grow with $d$, only mildly impact error bounds; practical utility remains strong in moderate dimensions, with variance-reduced estimators and empirical discrepancy bounds holding for larger $d$.
LHS remains central in surrogate-based optimization, sensitivity analysis, and uncertainty quantification, with ongoing advances in optimized, adaptive, and constraint-handling variants enabling its application to ever more complex scientific and engineering design challenges (Kucherenko et al., 2015, Shields et al., 2015, Boschini et al., 29 Aug 2025, Schenk et al., 2024, Lambert et al., 2024, Damblin et al., 2013, Doerr et al., 2021).