Dependent Triangular Array Threshold Models
- The paper establishes precise limit results for sums, maxima, and rare event probabilities under threshold nonlinearity and dependence, advancing risk and time series analysis.
- It employs techniques such as compound Poisson approximations, covariance-based central limit theorems, and sharp large deviations to tackle asymptotic challenges in high-dimensional settings.
- The work outlines flexible estimation procedures and threshold selection methods, with applications spanning portfolio risk management, nonlinear time series, and extreme value analysis.
Dependent Triangular Array Threshold Models constitute a unified framework for the study of limit theorems, rare event probabilities, estimation, and model specification in systems where data are organized as triangular arrays and the phenomenon of interest is governed by threshold-type nonlinearity under dependence. Such models arise naturally across nonlinear time series analysis, portfolio risk, high-dimensional statistics, spatial processes, heavy-tailed modeling, and matrix-valued time series. The central analytical challenge is to quantify asymptotic behavior precisely, whether convergence to Poisson-type limits, central limit theorems, or sharp large deviations, in the presence of dependence and the combinatorial structure imposed by triangular arrays.
1. Mathematical Foundations and Model Structures
Dependent triangular array threshold models typically involve an $n$-row array $\{X_{n,i} : 1 \le i \le n\}$, where the distribution of $X_{n,i}$ may depend on the row index $n$, the position $i$, or underlying latent random effects. In threshold mechanisms, a process is governed by switching regimes across a (possibly unknown) threshold. For example, in the threshold autoregressive (TAR) setting,

$$X_j = \rho_1 X_{j-1}\,\mathbf{1}\{X_{j-1} \le \theta\} + \rho_2 X_{j-1}\,\mathbf{1}\{X_{j-1} > \theta\} + \varepsilon_j,$$

with i.i.d. noise $(\varepsilon_j)$ and unknown threshold parameter $\theta$ (Chigansky et al., 2011).
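To make the switching mechanism concrete, here is a minimal sketch that simulates a two-regime TAR(1) path and recovers the threshold by profile least squares; the coefficients, noise law, and grid are illustrative assumptions, not values from the paper. The discontinuity of the objective in $\theta$ is exactly what produces the nonstandard (compound Poisson) limit theory discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_tar(n, rho1=0.4, rho2=-0.5, theta=0.3, sigma=1.0):
    """Simulate a two-regime TAR(1) path: the AR coefficient switches
    when the previous value crosses the threshold theta."""
    x = np.zeros(n)
    eps = rng.normal(0.0, sigma, size=n)
    for j in range(1, n):
        rho = rho1 if x[j - 1] <= theta else rho2
        x[j] = rho * x[j - 1] + eps[j]
    return x

def estimate_threshold(x, grid):
    """Profile least squares: for each candidate threshold fit the two
    regime slopes by OLS and return the candidate minimizing the RSS."""
    y, z = x[1:], x[:-1]
    best_rss, best_t = np.inf, None
    for t in grid:
        rss = 0.0
        for mask in (z <= t, z > t):
            if mask.sum() < 2:
                rss = np.inf
                break
            rho_hat = (z[mask] @ y[mask]) / (z[mask] @ z[mask])
            rss += np.sum((y[mask] - rho_hat * z[mask]) ** 2)
        if rss < best_rss:
            best_rss, best_t = rss, t
    return best_t

x = simulate_tar(5000)
grid = np.quantile(x[:-1], np.linspace(0.05, 0.95, 200))
print("estimated threshold:", estimate_threshold(x, grid))
```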
Matrix-valued time series generalize this concept to two-way arrays, with potentially distinct thresholds for rows and columns, as in the two-way matrix autoregressive (2-MART) model, written schematically as

$$X_t = \big(A_1\mathbf{1}\{r_{t-1} \le \theta_R\} + A_2\mathbf{1}\{r_{t-1} > \theta_R\}\big)\, X_{t-1}\, \big(B_1\mathbf{1}\{c_{t-1} \le \theta_C\} + B_2\mathbf{1}\{c_{t-1} > \theta_C\}\big)^{\top} + E_t,$$

where $r_{t-1}$ and $c_{t-1}$ are row- and column-level threshold variables with respective thresholds $\theta_R$ and $\theta_C$, introducing independent thresholding along the two dimensions (Yu et al., 14 Jul 2024).
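A single transition of this recursion, under the schematic form above, is easy to state in code; all coefficient matrices, dimensions, and the choice of threshold variables below are illustrative assumptions rather than the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)
m, p = 4, 3                                   # matrix dimensions (rows, cols)
A1, A2 = 0.5 * np.eye(m), -0.3 * np.eye(m)    # row-regime coefficient matrices
B1, B2 = 0.6 * np.eye(p), 0.2 * np.eye(p)     # column-regime coefficient matrices
theta_R, theta_C = 0.0, 1.0                   # row and column thresholds

def mart_step(X_prev, r_prev, c_prev):
    """One 2-MART transition: the left (row) coefficient switches on the row
    threshold variable, the right (column) coefficient on the column one."""
    A = A1 if r_prev <= theta_R else A2
    B = B1 if c_prev <= theta_C else B2
    return A @ X_prev @ B.T + rng.normal(size=(m, p))

# Drive the regimes with simple summaries of the previous matrix
# (purely for illustration).
X = rng.normal(size=(m, p))
for _ in range(100):
    X = mart_step(X, X.mean(), np.abs(X).mean())
print(np.round(X, 2))
```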
Other settings include arrays of maxima or exceedances (in risk and extremes), or triangular arrays induced by high-dimensional or time-dependent models with growing dimension and dependence complexity (Han et al., 2019).
2. Limit Theorems: Poisson, Gaussian, and Extreme Value Regimes
The analysis of dependent triangular arrays often focuses on deriving precise limit laws for sums, maxima, or functionals over the array:
- Compound Poisson Approximation: Weak convergence of sums toward a compound Poisson limit is established via Tikhomirov’s method. The limiting characteristic function satisfies an ordinary differential equation, and this equation is transferred to the prelimit sum through bounds on the discrepancy and error terms; under appropriate negligibility and mixing conditions, the gap between the prelimit and limiting characteristic functions vanishes asymptotically, ensuring convergence in law. This applies in contexts where threshold events are rare but weakly dependent, as typified by discontinuous likelihoods in threshold estimation (Chigansky et al., 2011).
- Central Limit Theorems: For $m$-dependent arrays, a general CLT is given under a Lindeberg-type condition, including the case where the dependence range $m = m_n$ grows with the sample size (Janson, 2021); a simulation sketch follows this list. A more general CLT uses covariance-based sufficient conditions over “affinity sets,” accommodating arbitrary dependence structures, mixing, and dependency graphs (Chandrasekhar et al., 2023). Stein’s method establishes that the normalized sum converges to the standard normal, provided covariance sums within and across affinity sets are of lower asymptotic order than the principal variance.
- Extreme Value Theory: In Gaussian arrays, normalized maxima converge to the Hüsler–Reiss distribution under weak dependence. Under strong dependence, the limit is a mixture of Gaussian and Hüsler–Reiss. Remarkably, maxima and minima remain asymptotically independent, allowing decoupled analyses of extreme risks (Hashorva et al., 2014).
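As a quick empirical check of the $m$-dependent CLT above, the following sketch builds each row of a triangular array as a moving average of i.i.d. noise (so the row is $m$-dependent by construction), lets the dependence range grow with $n$, and tests the standardized row sums for normality; the window rule and sample sizes are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def row_sum(n, m):
    """Sum of one row of an m-dependent triangular array: each X_{n,i} is a
    moving average of m+1 i.i.d. uniforms, so entries more than m apart are
    independent."""
    u = rng.uniform(-1.0, 1.0, size=n + m)
    x = np.convolve(u, np.ones(m + 1) / (m + 1), mode="valid")  # length n
    return x.sum()

# Let the dependence range grow slowly with n, as the CLT permits.
n = 10_000
m = int(n ** 0.25)                       # m_n -> infinity, but m_n = o(n)
sums = np.array([row_sum(n, m) for _ in range(1_000)])
z = (sums - sums.mean()) / sums.std()    # empirical standardization
print(stats.kstest(z, "norm"))           # large p-value: consistent with N(0,1)
```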
3. Sharp Large Deviations and Rare Event Analysis
Precise asymptotic estimates for rare events—such as large losses in portfolio credit risk—are obtained via Laplace–Olver asymptotics and conditional Bahadur–Rao refinements. These establish distinct scaling regimes:
- Bahadur–Rao (Light Tails): With Gaussian or exponential-power latent factor tails, exceedance probabilities exhibit exponential decay with an explicit Bahadur–Rao-type polynomial prefactor and polylogarithmic corrections (the classical refinement is displayed after this list).
- Heavy-Tailed (Regular Variation): When common factors have regularly varying tails, the scaling is polynomial with an index determined by the tail exponent.
- Endpoint/Boundary Cases: For bounded-support factors, the prefactor takes a different, endpoint-driven polynomial order; degenerate boundary cases revert to the classical regime (Deng et al., 23 Sep 2025).
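For orientation, the classical i.i.d. Bahadur–Rao refinement that these results sharpen reads, for non-lattice summands and $x$ above the mean,

$$P\Big(\tfrac{1}{n}\sum_{i=1}^{n} X_i \ge x\Big) \;\sim\; \frac{e^{-n\Lambda^*(x)}}{\lambda_x\, \sigma_x \sqrt{2\pi n}}, \qquad n \to \infty,$$

where $\Lambda^*$ is the Legendre transform of the cumulant generating function $\Lambda$, $\lambda_x$ solves $\Lambda'(\lambda_x) = x$, and $\sigma_x^2 = \Lambda''(\lambda_x)$; the triangular-array results above modify this prefactor according to the tail regime of the latent factors.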
The Gibbs conditioning principle in total variation reveals that conditioning on a rare event (e.g., large portfolio loss) induces asymptotic independence: individual components become i.i.d. under an exponential tilt, substantiating risk decomposition and univariate analysis in rare-event regimes.
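The tilt also drives efficient rare-event simulation. A minimal sketch, assuming i.i.d. standard normal summands (for which the exponential tilt is an explicit mean shift and all Bahadur–Rao quantities are closed-form), estimates $P(\bar X_n \ge x)$ by importance sampling and compares it with the Bahadur–Rao approximation:

```python
import numpy as np

rng = np.random.default_rng(3)
n, x, reps = 100, 0.5, 100_000   # illustrative sample size and level

# Importance sampling under the exponential tilt: for N(0,1) summands the
# tilted law is N(x, 1), and the likelihood ratio of a whole row is
# exp(-x * S_n + n * x**2 / 2).
S = rng.normal(loc=x, scale=1.0, size=(reps, n)).sum(axis=1)
weights = np.exp(-x * S + n * x**2 / 2.0)
p_is = np.mean(weights * (S >= n * x))

# Bahadur–Rao approximation in the standard normal case:
# Lambda*(x) = x^2/2, lambda_x = x, sigma_x = 1.
p_br = np.exp(-n * x**2 / 2.0) / (x * np.sqrt(2 * np.pi * n))

print(f"importance sampling: {p_is:.3e}")
print(f"Bahadur-Rao approx.: {p_br:.3e}")
```

Sampling under the tilted law concentrates simulation effort on the rare event, which is the computational counterpart of the Gibbs conditioning statement that components become i.i.d. under the tilt.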
4. Dependence Structures and Probability Inequalities
Quantification and control of dependence in triangular arrays are central. The literature details mixing coefficients (α, β, φ, ρ), weak dependence measures (θ, η, κ, λ), functional dependence (as in Wu’s physical dependence measure), and τ-coupling (Han et al., 2019). In high-dimensional or triangular settings, classical mixing bounds may deteriorate with growing dimension and are then insufficient; caution is warranted, and nonasymptotic inequalities (Bernstein-, Nagaev-, and Rosenthal-type) should be derived explicitly for each row. Table 1 in (Han et al., 2019) tabulates dependence measures and associated tail/moment inequalities, emphasizing the need for custom analysis in threshold models.
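As the independent-case baseline that these row-wise inequalities generalize, recall the classical Bernstein bound for centered, independent $X_i$ with $|X_i| \le M$ and $v = \sum_{i=1}^{n} \mathbb{E}X_i^2$:

$$P\Big(\Big|\sum_{i=1}^{n} X_i\Big| \ge t\Big) \;\le\; 2\exp\!\Big(-\frac{t^2}{2(v + Mt/3)}\Big);$$

in the dependent versions, dependence typically enters through inflated variance proxies and reduced effective sample sizes.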
Covariance-based CLTs (Chandrasekhar et al., 2023) synthesize these structural conditions, nesting mixing, $m$-dependence, and dependency graphs. Practitioners are advised to aggregate covariance within affinity sets and verify three scaling conditions, leveraging Stein’s method for asymptotic normality.
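A stylized numerical diagnostic of this recipe, assuming the row covariance matrix is known; the paper's actual conditions involve additional higher-order covariance sums, and the AR(1)-type covariance and window-based affinity sets here are illustrative:

```python
import numpy as np

def affinity_diagnostics(cov, affinity):
    """Stylized check of covariance-based CLT conditions: covariance mass
    inside each unit's affinity set should carry the variance, while mass
    across affinity sets should be of lower order.

    cov      : (n, n) covariance matrix of one row of the array
    affinity : list of index sets, affinity[i] = neighbors of unit i
    """
    n = cov.shape[0]
    v = cov.sum()                              # Var(S_n)
    inside = sum(abs(cov[i, j]) for i in range(n) for j in affinity[i])
    outside = np.abs(cov).sum() - inside       # mass across affinity sets
    return {"variance": v, "inside": inside, "outside/variance": outside / v}

# Example: an AR(1)-type covariance with affinity sets = nearby indices.
n, rho, width = 500, 0.6, 5
idx = np.arange(n)
cov = rho ** np.abs(idx[:, None] - idx[None, :])
affinity = [set(range(max(0, i - width), min(n, i + width + 1)))
            for i in range(n)]
print(affinity_diagnostics(cov, affinity))
```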
5. Threshold Selection, Estimation Procedures, and Predictive Modelling
Threshold estimation—especially in heavy-tailed or multivariate setups—requires not only identification of rare events but also inference on the threshold parameter:
- Distance Covariance-Based Selection: Independence of the radial component $R = \|X\|$ and the angular component $\Theta = X/\|X\|$ above a high threshold, as implied by multivariate regular variation, is tested via conditional distance covariance statistics. Subsampling algorithms are used for computational efficiency and to handle weak dependence (Wan et al., 2017); a simplified sketch follows this list.
- Varying-Threshold Modelling: Flexible models allow predictive effects of covariates to change continuously across the threshold, generalizing classical regression and ordinal models. Parameters are estimated across a grid of thresholds (binary stacking) with monotonicity correction, or via constrained maximum likelihood (using spline expansions) when structure can be exploited. Nonparametric models (e.g., random forests) further increase flexibility and can capture nonlinear interactions (Tutz, 2021).
- Two-Way Threshold VAR for Arrays: The 2-MART methodology extends single-variable thresholding to matrix time series, implementing thresholding mechanisms for both rows and columns. Estimation uses iterative least squares and grid search, with improved dimension reduction and interpretability (Yu et al., 14 Jul 2024).
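A simplified sketch of the selection idea, using the Python package dcor and an unconditional permutation test in place of the conditional distance covariance statistic and subsampling scheme of (Wan et al., 2017); the data-generating process and threshold grid are illustrative:

```python
import numpy as np
import dcor  # pip install dcor

rng = np.random.default_rng(4)

def radial_angular(X, threshold):
    """Split exceedances into radial and angular components."""
    r = np.linalg.norm(X, axis=1)
    keep = r > threshold
    return r[keep], X[keep] / r[keep, None]

def independence_pvalue(r, theta, n_perm=200):
    """Permutation p-value for distance covariance between R and Theta."""
    obs = dcor.distance_covariance(r, theta)
    perms = [dcor.distance_covariance(rng.permutation(r), theta)
             for _ in range(n_perm)]
    return np.mean([p >= obs for p in perms])

# Heavy-tailed toy data: radius and angle decouple far in the tail.
X = rng.standard_t(df=3, size=(5000, 2))
for q in (0.80, 0.90, 0.95, 0.99):
    u = np.quantile(np.linalg.norm(X, axis=1), q)
    r, theta = radial_angular(X, u)
    print(f"quantile {q:.2f}: p-value {independence_pvalue(r, theta):.3f}")
```

For heavy-tailed data the p-values should stabilize once the threshold is high enough for radial-angular independence to take hold, suggesting the lowest such level as the selected threshold.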
6. Implications, Applications, and Transferable Techniques
Threshold models on dependent triangular arrays have broad applicability:
- Risk Management: Sharp large deviation estimates and Gibbs conditioning inform the accurate calculation of Value-at-Risk (VaR) and Expected Shortfall (ES) in portfolios; the prefactor analysis clarifies when portfolios operate in the genuine large-deviation regime (Deng et al., 23 Sep 2025).
- Statistical Inference: Compound Poisson and covariance-based CLTs provide foundational asymptotic theory for estimation, confidence intervals, and hypothesis testing, particularly in singular and nonlinear regimes (Chigansky et al., 2011, Chandrasekhar et al., 2023).
- High-Dimensional Time Series: Probability inequalities support robust consistency and limit laws for estimators in evolving triangular arrays, mitigating the breakdown of classical mixing conditions (Han et al., 2019).
- Extreme Value Analysis: Hüsler–Reiss and mixture limits extend extreme value theory to dependent Gaussian arrays, supporting robust tail estimation in insurance and finance (Hashorva et al., 2014).
Techniques developed—localization, curvature analysis, tilt identification (for rare events), affinity set construction, and subsampling—are transferable across settings and facilitate robust rare-event analysis, model specification, and high-dimensional inference.
7. Future Directions and Challenges
- Non-Gaussian Dependencies: Extending CLTs and large deviation principles to settings with non-Gaussian and non-mixing structures requires further synthesis of covariance-based and functional dependence methods.
- Dynamic and Adaptive Thresholds: Incorporating time-varying or adaptive threshold mechanisms in high-dimensional and array-valued models remains a frontier area, both for computational methods and theoretical validation.
- Unified Frameworks: Integrating Poisson-type and Gaussian-type limits, large deviation principles, and threshold estimation under high-dimensional dependence will enable deeper understanding and broader applicability in scientific and financial risk modeling.
In summary, dependent triangular array threshold models encapsulate the intersection of dependence, nonlinearity, and rare-event analysis, providing powerful theoretical and practical tools for modern statistical and risk modeling. Robust asymptotic principles, flexible estimation procedures, and transferability of techniques underpin their utility across fields demanding precise quantification of probabilities, tail risk, and threshold-induced discontinuities.