
Autoregressive LLMP Models

Updated 29 March 2026
  • A-LLMP is a class of autoregressive time-series models featuring long memory via power-law kernels or heavy-tailed Mittag–Leffler innovations, generalizing AR and ARMA processes.
  • These models exhibit subdiffusive dynamics and non-classical fluctuation regimes, with scaling properties characterized by the Hurst exponent and alternative dependence measures.
  • Parameter estimation based on empirical Laplace transforms and Fourier-space methods supports robust inference, validated by simulation studies and high-frequency empirical data.

The autoregressive long-term memory process (A-LLMP) encompasses a class of autoregressive time-series models characterized by either (i) a power-law memory kernel generating self-affine, long-memory Gaussian processes, or (ii) stationary non-Gaussian processes with heavy-tailed Mittag–Leffler (ML) marginals or innovations. These frameworks, formulated in (Sakaguchi et al., 2015) and (Dhull, 10 Jan 2026), generalize classical AR and ARMA/ARFIMA processes by introducing either explicit power-law memory or non-standard, infinitely divisible noise laws. In both cases, the resultant dynamics exhibit anomalous fluctuation regimes, non-trivial moment properties, and non-classical estimation challenges.

1. Formal Definitions and Model Classes

Two distinct A-LLMP model classes are established in the literature:

  1. Infinite-order Gaussian A-LLMP with Power-law Memory (Sakaguchi et al., 2015):

$$x_n = \frac{\gamma}{\zeta(\beta)} \sum_{k=1}^{\infty} \frac{x_{n-k}}{k^{\beta}} + \varepsilon_n,$$

where $\gamma \in (0,1)$, $\zeta(\beta) = \sum_{k=1}^\infty k^{-\beta}$ is the Riemann zeta function, $\beta > 1$ is the memory exponent, and the $\varepsilon_n$ are i.i.d. Gaussian with $\langle \varepsilon_n^2 \rangle = \sigma^2$.

  2. AR(1) Process with Mittag–Leffler Component ("LLMP-AR(1)") (Dhull, 10 Jan 2026):

$$Y_t = \rho Y_{t-1} + \varepsilon_t, \qquad |\rho| < 1,$$

with either (A) $\mathrm{ML}(\alpha,1)$ marginals (i.e., $Y_t \sim \mathrm{ML}(\alpha,1)$), or (B) $\mathrm{ML}(\alpha,1)$ i.i.d. innovations $\varepsilon_t$. The Mittag–Leffler law has Laplace transform

$$\varphi_M(s) = \mathbb{E}[e^{-sM}] = \frac{1}{1 + s^{\alpha}}, \qquad 0 < \alpha \leq 1.$$

Both classes are "autoregressive with long memory," but differ fundamentally in the domain (Gaussian vs. heavy-tailed), the mechanism (kernel vs. innovation law), and the analytical methods required for their study.
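The ML law above can be simulated by compounding: if $E \sim \mathrm{Exp}(1)$ and $S_\alpha$ is a one-sided $\alpha$-stable variate with Laplace transform $e^{-s^\alpha}$, then $E^{1/\alpha} S_\alpha \sim \mathrm{ML}(\alpha,1)$, since conditioning on $E$ gives $\mathbb{E}[e^{-sE^{1/\alpha}S_\alpha}] = \mathbb{E}[e^{-s^\alpha E}] = 1/(1+s^\alpha)$. A minimal sketch (not from the cited papers), using the Kanter representation for the stable factor:

```python
import math
import random

def positive_stable(alpha, rng):
    """One-sided alpha-stable variate with Laplace transform exp(-s**alpha),
    via the Kanter representation (valid for 0 < alpha < 1)."""
    v = rng.uniform(0.0, math.pi)
    w = rng.expovariate(1.0)
    return (math.sin(alpha * v) / math.sin(v) ** (1.0 / alpha)
            * (math.sin((1.0 - alpha) * v) / w) ** ((1.0 - alpha) / alpha))

def mittag_leffler(alpha, rng):
    """ML(alpha, 1) variate as E**(1/alpha) * S with E ~ Exp(1),
    giving Laplace transform 1 / (1 + s**alpha)."""
    e = rng.expovariate(1.0)
    return e ** (1.0 / alpha) * positive_stable(alpha, rng)

# Sanity check: the empirical Laplace transform at s = 1 for alpha = 0.5
# should be close to 1 / (1 + 1**0.5) = 0.5.
rng = random.Random(0)
alpha, s, n = 0.5, 1.0, 200_000
lt = sum(math.exp(-s * mittag_leffler(alpha, rng)) for _ in range(n)) / n
```

The same generator serves for both regime (A) and regime (B) of the LLMP-AR(1) construction, either to draw innovations directly or to validate fitted marginals.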

2. Memory Kernels and Fluctuation Scaling

In the infinite-order Gaussian A-LLMP, the power-law kernel $K(k) \sim k^{-\beta}$ directly encodes long memory. The fundamental feature is the scaling of the root-mean-square displacement (RMSD)

$$\Delta(m) = \langle (x_{n+m} - x_n)^2 \rangle^{1/2}$$

as a function of the lag $m$. Using discrete Fourier analysis, the variance increment has the explicit form

$$\{\Delta(m)\}^2 = \frac{4\sigma^2}{N} \sum_{k=0}^{N/2-1} \frac{1 - \cos(\omega_k m)}{|1 - F(\omega_k)|^2},$$

where $F(\omega_k)$ contains the power-law memory (Sakaguchi et al., 2015). For small lags $m \ll m_\text{sat}$,

$$\Delta(m) \propto m^H, \qquad H < \tfrac{1}{2},$$

with the Hurst exponent $H$ controlled by $\beta$ (and weakly by $\gamma$). Larger $m$ yield saturation ($\Delta(m) \to \text{const}$) due to stationarity ($\gamma < 1$). For $\gamma \to 1$, the model approaches a nonstationary fractional scaling regime, analogous to ARFIMA$(0,d,0)$, though with a distinct kernel construction.
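The Fourier-space sum lends itself to direct numerical evaluation. A sketch, with $F(\omega) = \frac{\gamma}{\zeta(\beta)}\sum_k e^{-i\omega k} k^{-\beta}$ truncated at an assumed `kmax` terms and an illustrative transform size `N`:

```python
import cmath
import math

def rmsd_spectral(m, beta, gamma, sigma=1.0, N=1024, kmax=400):
    """Delta(m) via the spectral sum, with the kernel's Fourier series
    truncated at kmax terms (a numerical assumption)."""
    zeta_trunc = sum(k ** -beta for k in range(1, kmax + 1))
    total = 0.0
    for j in range(N // 2):
        w = 2.0 * math.pi * j / N
        F = gamma / zeta_trunc * sum(
            cmath.exp(-1j * w * k) * k ** -beta for k in range(1, kmax + 1))
        total += (1.0 - math.cos(w * m)) / abs(1.0 - F) ** 2
    return math.sqrt(4.0 * sigma ** 2 / N * total)

# Subdiffusive scaling: Delta(4)/Delta(1) = 4**H < 2 when H < 1/2.
d1 = rmsd_spectral(1, beta=1.5, gamma=0.5)
d4 = rmsd_spectral(4, beta=1.5, gamma=0.5)
```

The ratio $\Delta(4)/\Delta(1)$ staying below $2$ is the numerical signature of $H < \tfrac{1}{2}$ for these (illustrative) parameter values.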

3. Mittag–Leffler AR(1) Structure, Marginals, and Innovations

In the LLMP-AR(1) paradigm, two structurally distinct regimes are considered (Dhull, 10 Jan 2026):

  • A. Marginal $\mathrm{ML}(\alpha,1)$: The AR(1) is constructed so that $Y_t \sim \mathrm{ML}(\alpha,1)$ is strictly stationary. The Laplace recursion yields the required innovation law:

$$\varphi_{\varepsilon}(s) = \frac{1 + (\rho s)^\alpha}{1 + s^\alpha}.$$

The explicit density is obtained as a contour integral.

  • B. Innovation $\mathrm{ML}(\alpha,1)$: Taking the $\varepsilon_t$ as i.i.d. $\mathrm{ML}(\alpha,1)$, the MA($\infty$) representation gives the marginal Laplace transform:

$$\varphi_Y(s) = \prod_{i=0}^\infty \frac{1}{1 + (\rho^i s)^\alpha}.$$

Both are heavy-tailed, possess only fractional moments up to order $\alpha$, and lack finite variance. Classical second-order autocorrelations are undefined; alternative measures (codifference, fractional covariation) are considered but not explicitly derived in (Dhull, 10 Jan 2026).
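The innovation law in regime A follows in one line from stationarity: taking Laplace transforms of $Y_t = \rho Y_{t-1} + \varepsilon_t$ with independent right-hand terms gives $\varphi_Y(s) = \varphi_Y(\rho s)\,\varphi_\varepsilon(s)$, so

$$\varphi_\varepsilon(s) = \frac{\varphi_Y(s)}{\varphi_Y(\rho s)} = \frac{1/(1 + s^\alpha)}{1/(1 + (\rho s)^\alpha)} = \frac{1 + (\rho s)^\alpha}{1 + s^\alpha}.$$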

4. Statistical Estimation by Empirical Laplace Methods

Parameter estimation in LLMP-AR(1) models leverages the empirical Laplace transform:

$$\varphi_n(s) = \frac{1}{n} \sum_{i=1}^n e^{-s Z_i}$$

for observed samples $\{Z_i\}$. For the time series $\{Y_t\}$, residuals $\hat{\varepsilon}_t = Y_t - \rho Y_{t-1}$ are computed for each candidate $\rho$. The loss

$$S_n(\theta) = \sum_{j=1}^m w_j \left[ \varphi_n(s_j) - \varphi(s_j;\theta) \right]^2$$

is minimized over $\theta = (\alpha, \rho)$. Consistency and asymptotic normality at the $\sqrt{n}$-rate hold under standard regularity assumptions. Simulations with $N = 500$, $n = 1000$ for $(\alpha,\rho) = (0.4, 0.4)$ and $(0.6, 0.8)$ demonstrate that the root-mean-square error (RMSE) and mean absolute error (MAE) of the estimates are below $0.06$ in all parameters, and the method yields boxplots concentrated around the true values (Dhull, 10 Jan 2026).
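The minimum-distance step can be sketched in a few lines. The following toy version (not the paper's implementation) fits $\alpha$ alone by grid search with unit weights $w_j$, and uses the fact that $\mathrm{ML}(1,1)$ coincides with the unit exponential law to sanity-check the fit; the grid, sample size, and $s_j$ values are illustrative assumptions:

```python
import math
import random

def emp_laplace(sample, s):
    """Empirical Laplace transform phi_n(s) = (1/n) sum exp(-s * Z_i)."""
    return sum(math.exp(-s * z) for z in sample) / len(sample)

def fit_alpha(sample, s_grid, alphas):
    """Minimum-distance estimate of alpha: minimize the (unit-weight)
    squared gap between empirical and model Laplace transforms."""
    emp = {s: emp_laplace(sample, s) for s in s_grid}
    def loss(a):
        return sum((emp[s] - 1.0 / (1.0 + s ** a)) ** 2 for s in s_grid)
    return min(alphas, key=loss)

# ML(1, 1) is the unit exponential law, so an Exp(1) sample should
# give an estimate near alpha = 1.
rng = random.Random(1)
sample = [rng.expovariate(1.0) for _ in range(20_000)]
alpha_hat = fit_alpha(sample, s_grid=[0.5, 1.0, 2.0],
                      alphas=[i / 20 for i in range(2, 21)])
```

The full estimator additionally profiles over $\rho$ by recomputing residuals $\hat{\varepsilon}_t$ for each candidate value before evaluating the loss.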

5. Analytical Methods and Hurst/Memory Exponent Relation

For Gaussian A-LLMPs, analytical relationships between fluctuation exponents and kernel parameters are established via Yule–Walker equations. Using the autocovariance ansatz $C(m) = C(0)[1 - \beta' m^p]$ with $p = 2H$, the first three Yule–Walker equations yield coupled constraints for $\beta'$ and $p$ (Sakaguchi et al., 2015). Numerical solution provides $H(\beta)$, showing that arbitrary subdiffusive dynamics ($H < 1/2$) can be realized by tuning $\beta > 1$. Fast RMSD evaluation leverages Fourier-space sums, while direct time-series simulation enables empirical increment computation.
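The direct simulation route is straightforward once the infinite kernel is truncated. A sketch, with truncation length `kmax` and the parameter values chosen purely for illustration:

```python
import math
import random

def simulate_allmp(n, gamma, beta, sigma=1.0, kmax=1000, seed=0):
    """Sample path of the infinite-order Gaussian A-LLMP, with the
    power-law kernel truncated at kmax lags (a numerical assumption)."""
    rng = random.Random(seed)
    zeta_trunc = sum(k ** -beta for k in range(1, kmax + 1))
    weights = [gamma / zeta_trunc * k ** -beta for k in range(1, kmax + 1)]
    x = []
    for _ in range(n):
        memory = sum(w * x[-1 - j] for j, w in enumerate(weights) if j < len(x))
        x.append(memory + rng.gauss(0.0, sigma))
    return x

def empirical_rmsd(x, m):
    """Empirical Delta(m) = <(x_{n+m} - x_n)^2>^(1/2) from one sample path."""
    sq = [(x[i + m] - x[i]) ** 2 for i in range(len(x) - m)]
    return math.sqrt(sum(sq) / len(sq))

path = simulate_allmp(3000, gamma=0.9, beta=1.5)
d1, d4 = empirical_rmsd(path, 1), empirical_rmsd(path, 4)
```

Comparing empirical increments such as `d1` and `d4` against the Fourier-space prediction is the consistency check used for the $H(\beta)$ relation.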

In the LLMP-AR(1) case, closed-form expressions via Laplace transforms dictate both stationary marginals and required innovation laws. The nonexistence of higher moments and classical autocorrelation necessitates alternative statistical tools for fluctuation and dependence analysis.

6. Implications, Applications, and Empirical Evidence

A-LLMPs provide a mathematically controlled approach to generating processes with self-affine, subdiffusive scaling at small lags, encompassing both Gaussian long-memory and heavy-tailed, infinite-variance regimes. By tuning model parameters ($\beta$ or $\alpha$), the scaling of small-$m$ fluctuations can be prescribed throughout the admissible range ($H < 1/2$ for Gaussian, heavy tails for ML).

Empirical Laplace-based inference on high-frequency trading inter-arrival data highlights the appropriateness of the ML law in capturing observed heavy tails (Dhull, 10 Jan 2026). A plausible implication is that such autoregressive mechanisms may underlie observed non-Gaussian scaling in finance and complex systems, where both long-memory and heavy-tailed fluctuations coexist.

These models extend the theoretical toolkit beyond AR, ARMA, and ARFIMA, enabling precise analysis of anomalous time-series fluctuation regimes, both in physically motivated Gaussian contexts and in heavy-tailed, high-frequency empirical domains.
