
Local Asymptotic Normality (LAN) Property

Updated 2 January 2026
  • LAN is defined as the quadratic (Gaussian) approximation of the log-likelihood under local perturbations, serving as a foundation for asymptotic efficiency and minimax bounds.
  • It is applied in diverse settings such as stochastic differential equations, jump processes, and high-frequency models to facilitate rigorous estimation in non-standard frameworks.
  • Advanced techniques like spectral methods, orthogonalization of central sequences, and Malliavin calculus are employed to verify LAN and extract precise information in complex statistical models.

The Local Asymptotic Normality (LAN) property is a cornerstone of modern asymptotic statistics: it characterizes when a statistical family of probability measures, under regular local perturbations of a parameter, admits a quadratic (Gaussian) approximation to the log-likelihood. This enables rigorous derivation of asymptotic minimax bounds, sharp efficiency results, and deep connections to the theory of optimal estimation in complex models such as stochastic differential equations, jump processes, high-frequency observations, and infinite-dimensional settings.

1. Formal Definition and Structural Essence

Let $\{\mathbb{P}_\theta^{(n)}: \theta\in\Theta\subset\mathbb{R}^d\}$ be a family of probability measures on a sequence of measurable spaces $(\mathcal{X}_n,\mathcal{F}_n)$. The LAN property at a fixed $\theta_0$ requires the existence of a rate matrix $r_n(\theta_0)\in\mathbb{R}^{d\times d}$ and a positive-definite (or at least non-degenerate) Fisher information matrix $I(\theta_0)$ such that, for each fixed $h\in\mathbb{R}^d$,

$$\log\frac{d\mathbb{P}_{\theta_0 + r_n(\theta_0) h}^{(n)}}{d\mathbb{P}_{\theta_0}^{(n)}}(X^n) = h^\top \Delta_n(\theta_0) - \frac{1}{2} h^\top I(\theta_0) h + o_{P_{\theta_0}}(1),$$

where $\Delta_n(\theta_0)\xrightarrow{d}N(0,I(\theta_0))$ under $\mathbb{P}_{\theta_0}^{(n)}$ as $n\to\infty$ (Tran et al., 2013, Kohatsu-Higa et al., 2015, Brouste et al., 2016, Cai et al., 20 Oct 2025, Cai, 30 Dec 2025). The precise scaling $r_n$ and the limiting $I(\theta_0)$ depend on the stochastic model and the asymptotic regime.
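As a concrete check, the Gaussian location model $X_i \sim N(\theta, 1)$ is exactly LAN with $r_n = n^{-1/2}$ and $I(\theta_0) = 1$: the remainder term vanishes identically, not just in probability. A minimal numerical sketch (model and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta0, h = 10_000, 1.0, 0.7
x = rng.normal(theta0, 1.0, size=n)

# Local alternative theta_n = theta0 + h / sqrt(n), i.e. rate r_n = n^{-1/2}.
theta_n = theta0 + h / np.sqrt(n)

# Exact log-likelihood ratio for N(theta, 1) observations.
log_lr = np.sum((x - theta0) ** 2 / 2 - (x - theta_n) ** 2 / 2)

# LAN expansion: h * Delta_n - (1/2) * h^2 * I, with I = 1 and
# Delta_n = n^{-1/2} * sum(x - theta0) ~ N(0, 1).
delta_n = np.sum(x - theta0) / np.sqrt(n)
lan_approx = h * delta_n - h ** 2 / 2

print(log_lr, lan_approx)  # equal up to floating-point error: exact LAN
```

In richer models the same expansion holds only up to an $o_{P_{\theta_0}}(1)$ remainder, which is where the technical work lies.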

2. LAN in SDEs, Driven Diffusions, and Models with Jumps

LAN characterizations have been established for a wide array of continuous and discrete-time stochastic processes, including ergodic diffusions (Kohatsu-Higa et al., 2015), diffusions with jumps or jump-diffusions (Tran et al., 2013), fractional noise models (Brouste et al., 2016, Cai et al., 20 Oct 2025, Cai, 30 Dec 2025, Liu et al., 2015, Chiba, 2018), and high-frequency sampled models with infinite activity jumps (Ivanenko et al., 2014).

A canonical example is an SDE with jumps observed at high frequency:

$$dX_t = b(\theta, X_t)\,dt + \sigma(X_t)\,dB_t + \int_{\mathbb{R}^d} c(X_{t-},z)\,\tilde{N}(dt,dz),$$

with log-likelihood expansion under the local alternative $\theta_n = \theta_0 + u/\sqrt{n\Delta_n}$:

$$\log \frac{dP_{\theta_n}^{(n)}}{dP_{\theta_0}^{(n)}} = u^\top S_n - \frac{1}{2} u^\top I(\theta_0) u + o_P(1),$$

where $S_n$ is a properly normalized score vector arising from the increments and $I(\theta_0)=E_{\pi_{\theta_0}}[(\partial_\theta b(\theta_0,X))^\top \sigma(X)^{-2}\, \partial_\theta b(\theta_0,X)]$ (Kohatsu-Higa et al., 2015).
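As an illustrative special case (not taken from the cited papers), take the Ornstein-Uhlenbeck drift $b(\theta,x) = -\theta x$ with $\sigma \equiv 1$ and no jumps. The formula for $I(\theta_0)$ above reduces to $E_{\pi_{\theta_0}}[X^2] = 1/(2\theta_0)$, the stationary second moment, which a high-frequency Euler simulation can verify:

```python
import numpy as np

rng = np.random.default_rng(1)
theta0 = 2.0                  # drift parameter in dX_t = -theta * X_t dt + dB_t
n, dt = 200_000, 0.01         # high-frequency regime: dt small, n * dt large

# Euler scheme, started from the stationary law N(0, 1/(2*theta0)).
dW = rng.normal(0.0, np.sqrt(dt), size=n)
x = np.empty(n + 1)
x[0] = rng.normal(0.0, np.sqrt(1.0 / (2 * theta0)))
for i in range(n):
    x[i + 1] = x[i] - theta0 * x[i] * dt + dW[i]

# For b(theta, x) = -theta * x and sigma = 1, the LAN Fisher information is
# I(theta0) = E_pi[(d_theta b)^2 / sigma^2] = E_pi[X^2] = 1/(2*theta0).
I_hat = np.mean(x ** 2)
print(I_hat, 1.0 / (2 * theta0))  # empirical vs. theoretical value 0.25
```

The small discrepancy reflects Euler discretization bias and Monte Carlo error, both of which vanish as $dt\to 0$ and $n\Delta_n\to\infty$.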

In the jump-diffusion framework

$$dX_t = b(\theta, X_{t-})\,dt + \sigma(\theta, X_{t-})\,dW_t + \int_Z c(\theta, X_{t-},z)\,(\mu(dt,dz) - \nu(dz)\,dt),$$

one obtains, under regularity conditions, a pathwise LAN expansion with a Fisher information matrix structured in terms of both the continuous and the jump components (Tran et al., 2013).

Jump-driven Lévy processes exhibiting locally $\alpha$-stable behavior require nontrivial rate matrices, potentially non-diagonal if the process is asymmetric, reflecting collinearities in the likelihood expansion (Ivanenko et al., 2014).

3. High-Frequency and Semiparametric LAN: Rates, Non-Diagonality, and Information Structure

In high-frequency sampled models (e.g., fractional Brownian motion, mixed fBM, and fractional Gaussian noise), the score components for different parameters (e.g., volatility $\sigma$ and Hurst index $H$) are often asymptotically collinear, necessitating non-diagonal rate matrices in the LAN expansion to obtain non-degenerate normal limits (Brouste et al., 2016, Cai, 30 Dec 2025). For mixed fBM, the central sequence must be constructed via an orthogonalizing (lower-triangular) transform to ensure invertibility of the Fisher information matrix and efficiency of the LAN property (Cai, 30 Dec 2025).
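The orthogonalization step can be sketched on a toy two-dimensional central sequence with nearly collinear components (the correlation value is illustrative, not taken from the cited models): multiplying by the inverse of a lower-triangular Cholesky factor yields a transformed sequence with a well-conditioned, non-degenerate limit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy central sequence whose two components (think: scores for sigma and H)
# are nearly collinear, as in high-frequency fractional models.
cov = np.array([[1.0, 0.999],
                [0.999, 1.0]])
delta = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)

# Lower-triangular orthogonalization: with L the Cholesky factor of cov,
# L^{-1} @ delta has identity covariance, so the transformed LAN expansion
# has a non-degenerate N(0, Id) limit.
L = np.linalg.cholesky(cov)
delta_orth = delta @ np.linalg.inv(L).T

emp_cov = np.cov(delta_orth.T)
print(emp_cov)  # approximately the 2x2 identity matrix
```

Absorbing $L$ into the rate matrix is what makes the rate matrix non-diagonal in these models.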

For infinite-dimensional (nonparametric) diffusions, such as reflected scalar diffusions with unknown drift in $C^1$, the LAN property is realized in terms of abstract Hilbert-space operators. The LAN norm and information operator are defined via the score operator $A_b$ and its adjoint $A_b^*$, and the limit experiment corresponds to a Gaussian shift in Hilbert space (Wang, 2018).

| Model Type | Scaling ($r_n$) | Key Feature |
| --- | --- | --- |
| Ergodic diffusion | $(n\Delta_n)^{-1/2}$ | Standard rate under full observation |
| fBM/fGn (high-frequency) | Non-diagonal $R_n$, dependent on $H$ | Collinearity of scores; phase transitions at $H=1/2$ and $H=3/4$ |
| Jump models (Lévy) | Matrix with off-diagonal terms | Asymmetry and drift-skew interaction |
| SDE with periodicity | $\mathrm{diag}(n^{-1/2}, n^{-3/2})$ (shape, period) | Mixed rates; block-structured Fisher matrix |

4. LAN in Fractional and Mixed Fractional Models

In models with fractional noise, including SDEs driven by $B^H$ ($H\neq 1/2$) or mixed processes $Y_t=\sigma B^H_t + W_t$, the LAN property can be established, but parameter identification and minimax bounds depend heavily on the interplay between sampling frequency, the value of $H$, and the influence of the white noise (Brouste et al., 2016, Cai, 30 Dec 2025).

In particular, for mixed fractional OU processes with $H>3/4$, the process is a semimartingale and the likelihood can be handled via an explicit innovation decomposition using Volterra-equation resolvents (Cai et al., 20 Oct 2025). The Fisher information matrix is then explicitly computable via spectral integrals involving the derivative of the log spectral density.
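A minimal sketch of such a spectral computation, using an AR(1) spectral density as a stand-in (the mixed fractional OU density of the cited paper would enter the same way): the Whittle-type information $I(\theta) = \frac{1}{4\pi}\int_{-\pi}^{\pi} \big(\partial_\theta \log f_\theta(\lambda)\big)^2 \, d\lambda$ evaluates to $1/(1-\theta^2)$ for $f_\theta(\lambda) \propto (1 - 2\theta\cos\lambda + \theta^2)^{-1}$.

```python
import numpy as np

# Whittle formula: I(theta) = (1/(4*pi)) * int_{-pi}^{pi} (d_theta log f)^2 dlam.
# For an AR(1) spectral density f_theta(lam) ∝ 1/(1 - 2*theta*cos(lam) + theta^2),
# this integral evaluates in closed form to 1/(1 - theta^2).
theta = 0.5
lam = np.linspace(-np.pi, np.pi, 200_001)
dlog_f = (2 * np.cos(lam) - 2 * theta) / (1 - 2 * theta * np.cos(lam) + theta ** 2)

# Mean over [-pi, pi] approximates (1/(2*pi)) * integral, so divide by 2
# to get integral/(4*pi).
I_num = np.mean(dlog_f ** 2) / 2.0
print(I_num, 1.0 / (1 - theta ** 2))  # both close to 4/3
```

For genuinely fractional densities the integral has no simple closed form and must be evaluated numerically in exactly this way.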

Degeneracy of the Fisher matrix (as $H\uparrow 3/4$ or for small $H$) necessitates careful orthogonalization of the central sequences, as certain directions in parameter space become unidentifiable or nearly collinear at the LAN scale (Cai, 30 Dec 2025).

5. Statistical Implications: Minimax Bounds, Efficiency, and Limit Experiments

LAN forms the theoretical foundation for sharp asymptotic lower bounds, optimality results, and the structure of limiting statistical experiments. Once LAN holds, the experimental sequence is locally asymptotically equivalent to a Gaussian shift: an asymptotically efficient estimator sequence $\hat\theta_n$ satisfies

$$r_n(\theta_0)^{-1}(\hat\theta_n - \theta_0) \Longrightarrow N(0, I(\theta_0)^{-1}),$$

with the maximum likelihood estimator (MLE) being asymptotically efficient (Tran et al., 2013, Kohatsu-Higa et al., 2015, Liu et al., 2015, Cai, 30 Dec 2025, Cai et al., 20 Oct 2025).
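The efficiency statement is easy to check by simulation in a simple parametric model (the exponential rate model below is illustrative, not taken from the cited papers): there $I(\theta_0) = \theta_0^{-2}$, so $\sqrt{n}(\hat\theta_n - \theta_0)$ should have variance close to $\theta_0^2$.

```python
import numpy as np

rng = np.random.default_rng(3)
theta0, n, reps = 2.0, 500, 4_000

# Exponential(rate theta): I(theta) = 1/theta^2, and the MLE is 1/sample mean.
# Under LAN, sqrt(n)*(MLE - theta0) => N(0, I(theta0)^{-1}) = N(0, theta0^2).
x = rng.exponential(1.0 / theta0, size=(reps, n))
mle = 1.0 / x.mean(axis=1)
z = np.sqrt(n) * (mle - theta0)

avar = z.var()
print(avar, theta0 ** 2)  # empirical variance close to 4 = I(theta0)^{-1}
```

Any regular estimator with strictly smaller limiting variance would contradict the convolution theorem that LAN delivers.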

Minimax lower bounds (the Hájek–Le Cam theorem) follow directly, characterizing the lower bound on any regular estimator's risk as the risk of estimation in a Gaussian shift experiment with the same information. Contiguity of the shifted experiments (mutual absolute continuity in the LAN expansion) is pivotal for local minimax theory (Cai, 30 Dec 2025).
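The role of contiguity can be made concrete: under LAN with information $I$, the log-likelihood ratio is asymptotically $N(-\tfrac{1}{2}h^\top I h,\ h^\top I h)$, and this specific mean-variance relation is exactly what makes the shifted measures contiguous (it forces $E[\exp(\log \mathrm{LR})] = 1$). A sketch in the scalar Gaussian location model, where the expansion is exact:

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps, h = 2_000, 5_000, 1.2

# Gaussian location model with I = 1: log LR = h*Delta_n - h^2/2 exactly,
# and Delta_n = sum(x)/sqrt(n) ~ N(0, 1) under theta0 = 0.
x = rng.normal(0.0, 1.0, size=(reps, n))
delta_n = x.sum(axis=1) / np.sqrt(n)
log_lr = h * delta_n - h ** 2 / 2

# The limit law N(-h^2/2, h^2): mean -0.72, variance 1.44 for h = 1.2.
print(log_lr.mean(), -h ** 2 / 2)
print(log_lr.var(), h ** 2)
```

A limit law with any other mean for the same variance would make one sequence of measures asymptotically singular with respect to the other.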

In specialized settings (multi-armed bandits (Akker et al., 13 Dec 2025); mean-field interactions in McKean-Vlasov SDEs (Maestra et al., 2022, Heidari et al., 17 Nov 2025)), the LAN property holds under specific regularity and identifiability conditions, with explicit block or composite forms of the Fisher information reflecting adaptive or interacting sampling designs.

6. Advanced Extensions: Mixed Normality, Cubic Approximations, and Monte Carlo

Beyond classical LAN, one encounters local asymptotic mixed normality (LAMN) for models with random/degenerate limiting information (especially in non-ergodic cases or under partial observation) (Fukasawa et al., 2020, Tran et al., 2013). Here, the limit experiment is a family of mixed (Gaussian with random covariance) shifts, quantified precisely by random Fisher information.

Recent advances include "Rescaled Local Asymptotic Normality" (RLAN), which extends the asymptotic quadratic approximation to larger, $n^{-1/4}$-sized neighborhoods, enabling efficient cubic-maximum-likelihood estimation and statistical efficiency even in the presence of scaled Monte Carlo error (Ning et al., 2020). The RLAN extension requires strong smoothness and moment bounds, but provides tools for valid estimation from noisy, simulation-based log-likelihood computations.

7. Methodological Tools and Proof Techniques

The establishment and verification of the LAN property rely on advanced techniques, including:
  • Malliavin calculus, used to derive tractable representations of score functions for SDEs with jumps (Kohatsu-Higa et al., 2015, Tran et al., 2013).
  • Spectral methods, which express the Fisher information through integrals involving the derivative of the log spectral density (Cai et al., 20 Oct 2025).
  • Orthogonalization (lower-triangular transforms) of central sequences to handle asymptotically collinear scores (Brouste et al., 2016, Cai, 30 Dec 2025).
  • Innovation decompositions via Volterra-equation resolvents for mixed fractional semimartingale models (Cai et al., 20 Oct 2025).

These technical developments underpin contemporary statistical inference for stochastic dynamical systems, clarify phase transitions in estimation rates, and support efficiency in the broad field of stochastic models.

