Autoregressive LLMP Models
- A-LLMP is a class of autoregressive time-series models featuring long memory via power-law kernels or heavy-tailed Mittag–Leffler innovations, generalizing AR and ARMA processes.
- These models exhibit subdiffusive dynamics and non-classical fluctuation regimes, with scaling properties characterized by the Hurst exponent and alternative dependence measures.
- Parameter estimation leveraging empirical Laplace transforms and Fourier-space methods yields robust inference, as validated by simulation studies and high-frequency empirical data.
The autoregressive long-term memory process (A-LLMP) encompasses a class of autoregressive time-series models characterized by either (i) a power-law memory kernel generating self-affine, long-memory Gaussian processes, or (ii) stationary non-Gaussian processes with heavy-tailed Mittag–Leffler (ML) marginals or innovations. These frameworks, formulated in (Sakaguchi et al., 2015) and (Dhull, 10 Jan 2026), generalize classical AR and ARMA/ARFIMA processes by introducing either explicit power-law memory or non-standard, infinitely divisible noise laws. In both cases, the resultant dynamics exhibit anomalous fluctuation regimes, non-trivial moment properties, and non-classical estimation challenges.
1. Formal Definitions and Model Classes
Two distinct A-LLMP model classes are established in the literature:
- Infinite-order Gaussian A-LLMP with Power-law Memory (Sakaguchi et al., 2015):
$$X_t = \frac{\varepsilon}{\zeta(\beta)} \sum_{k=1}^{\infty} k^{-\beta}\, X_{t-k} + \eta_t,$$
where $\varepsilon \in (0,1)$, $\zeta$ is the Riemann zeta function, $\beta > 1$ is the memory exponent, and the $\eta_t$ are i.i.d. Gaussian ($\mathcal{N}(0,\sigma^2)$).
- AR(1) Process with Mittag–Leffler Component ("LLMP-AR(1)") (Dhull, 10 Jan 2026):
$$X_t = \rho X_{t-1} + \epsilon_t, \qquad 0 < \rho < 1,$$
with either (A) ML($\alpha,\lambda$) marginals (i.e., $X_t \sim$ ML($\alpha,\lambda$)), or (B) ML($\alpha,\lambda$) i.i.d. innovations $\epsilon_t$. The Mittag–Leffler law has Laplace transform
$$\mathbb{E}\bigl[e^{-sL}\bigr] = \frac{1}{1+\lambda s^{\alpha}}, \qquad s \ge 0,\ 0 < \alpha \le 1,\ \lambda > 0.$$
Both classes are "autoregressive with long memory," but differ fundamentally in the domain (Gaussian vs. heavy-tailed), the mechanism (kernel vs. innovation law), and the analytical methods required for their study.
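To make the Gaussian class concrete, the following is a minimal simulation sketch, assuming the power-law AR form reconstructed above with the infinite kernel truncated at `K` lags and the coefficient sum fixed at $\varepsilon < 1$ for stationarity; the function name and default values are illustrative, not taken from (Sakaguchi et al., 2015).

```python
import numpy as np

def simulate_gaussian_allmp(T, beta, eps=0.9, sigma=1.0, K=1000, seed=0):
    """Simulate x_t = eps * sum_{k=1}^K w_k x_{t-k} + eta_t with w_k ∝ k^{-beta},
    a truncated stand-in for the infinite-order power-law kernel."""
    rng = np.random.default_rng(seed)
    k = np.arange(1, K + 1, dtype=float)
    w = k ** (-beta)
    w *= eps / w.sum()                 # coefficients sum to eps < 1 => stationary
    x = np.zeros(T + K)                # zero initial history serves as burn-in
    eta = rng.normal(0.0, sigma, size=T + K)
    for t in range(K, T + K):
        x[t] = w @ x[t - K:t][::-1] + eta[t]   # w[0]*x[t-1] + ... + w[K-1]*x[t-K]
    return x[K:]

series = simulate_gaussian_allmp(T=20_000, beta=1.5)
```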
2. Memory Kernels and Fluctuation Scaling
In the infinite-order Gaussian A-LLMP, the power-law kernel directly encodes long memory. The fundamental feature is the scaling of the root-mean-square displacement (RMSD)
$$F(m) = \sqrt{\mathbb{E}\bigl[(X_{t+m}-X_t)^2\bigr]}$$
as a function of lag $m$. Using discrete Fourier analysis, the variance increment has the explicit form
$$F(m)^2 = \frac{\sigma^2}{\pi} \int_0^{\pi} \frac{1-\cos(m\omega)}{\bigl|1-\varepsilon\,\hat{K}(\omega)\bigr|^{2}}\, d\omega, \qquad \hat{K}(\omega) = \frac{1}{\zeta(\beta)} \sum_{k=1}^{\infty} k^{-\beta} e^{-ik\omega},$$
where $\hat{K}(\omega)$ contains the power-law memory (Sakaguchi et al., 2015). For small lags $m$,
$$F(m) \sim m^{H},$$
with the Hurst exponent $H$ controlled by $\beta$ (and weakly by $\varepsilon$). Larger lags yield saturation ($F(m) \to \mathrm{const}$) due to stationarity ($\varepsilon < 1$). For $\varepsilon \to 1$, the model approaches a nonstationary fractional scaling regime, analogous to ARFIMA($0,d,0$), though with a distinct kernel construction.
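A quick numerical check of this small-lag scaling, reusing `series` from the simulation sketch in Section 1; the lag grid and the cutoff for the fit are arbitrary illustrative choices, not a prescription from the paper.

```python
import numpy as np

def rmsd(x, lags):
    """Empirical RMSD F(m) = sqrt(mean((x_{t+m} - x_t)^2)) for each lag m."""
    return np.array([np.sqrt(np.mean((x[m:] - x[:-m]) ** 2)) for m in lags])

lags = np.unique(np.logspace(0, 3, 25).astype(int))
F = rmsd(series, lags)

# Fit log F(m) = H log m + c on small lags, where F(m) ~ m^H should hold;
# at large lags F(m) saturates because the process is stationary (eps < 1).
small = lags <= 30
H, _ = np.polyfit(np.log(lags[small]), np.log(F[small]), 1)
print(f"fitted small-lag exponent H ≈ {H:.3f}")
```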
3. Mittag–Leffler AR(1) Structure, Marginals, and Innovations
In the LLMP-AR(1) paradigm, two structurally distinct regimes are considered (Dhull, 10 Jan 2026):
- A. Marginal ML($\alpha,\lambda$): The AR(1) is constructed so that $X_t \sim$ ML($\alpha,\lambda$) is strictly stationary. The Laplace recursion $\psi_X(s) = \psi_X(\rho s)\,\phi_\epsilon(s)$ yields the required innovation law:
$$\phi_\epsilon(s) = \frac{\psi_X(s)}{\psi_X(\rho s)} = \frac{1+\lambda\rho^{\alpha} s^{\alpha}}{1+\lambda s^{\alpha}}.$$
The explicit innovation density is obtained as a contour integral.
- B. Innovation ML($\alpha,\lambda$): Taking the $\epsilon_t$ as i.i.d. ML($\alpha,\lambda$), the MA($\infty$) form $X_t = \sum_{j=0}^{\infty} \rho^{j} \epsilon_{t-j}$ shows the marginal Laplace transform:
$$\psi_X(s) = \prod_{j=0}^{\infty} \frac{1}{1+\lambda\,\rho^{j\alpha} s^{\alpha}}.$$
Both regimes are heavy-tailed, possess only fractional moments of order $p < \alpha$, and lack finite variance. Classical second-order autocorrelations are undefined; alternative measures (codifference, fractional covariation) are considered but not explicitly derived in (Dhull, 10 Jan 2026).
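Both regimes can be simulated directly, assuming the ML($\alpha,\lambda$) parametrization above (Laplace transform $1/(1+\lambda s^\alpha)$). The sampler below uses the standard Kozubowski–Rachev exponential–uniform representation of the Mittag–Leffler law, and the zero-inflated innovation in variant A follows from rewriting $\phi_\epsilon(s)$ as the mixture $\rho^\alpha + (1-\rho^\alpha)/(1+\lambda s^\alpha)$; none of this code is from (Dhull, 10 Jan 2026).

```python
import numpy as np

def ml_rvs(alpha, lam, size, rng):
    """ML(alpha, lam) variates with Laplace transform 1/(1 + lam * s**alpha),
    via the Kozubowski-Rachev representation (alpha = 1 recovers Exp(lam))."""
    u = 1.0 - rng.random(size)          # uniform on (0, 1], safe for log
    v = 1.0 - rng.random(size)
    factor = (np.sin(alpha * np.pi) / np.tan(alpha * np.pi * v)
              - np.cos(alpha * np.pi)) ** (1.0 / alpha)   # always >= 0 on (0, 1]
    return -(lam ** (1.0 / alpha)) * np.log(u) * factor

def llmp_ar1_marginal(T, alpha, lam, rho, seed=0):
    """Variant A: strictly stationary AR(1) with ML(alpha, lam) marginals.
    Innovation = 0 with probability rho**alpha, else an independent ML draw."""
    rng = np.random.default_rng(seed)
    x = np.empty(T)
    x[0] = ml_rvs(alpha, lam, 1, rng)[0]    # start from the stationary marginal
    jumps = ml_rvs(alpha, lam, T, rng)
    fire = rng.random(T) >= rho ** alpha
    for t in range(1, T):
        x[t] = rho * x[t - 1] + (jumps[t] if fire[t] else 0.0)
    return x

def llmp_ar1_innovation(T, alpha, lam, rho, seed=0):
    """Variant B: AR(1) driven by i.i.d. ML(alpha, lam) innovations."""
    rng = np.random.default_rng(seed)
    eps = ml_rvs(alpha, lam, T, rng)
    x = np.empty(T)
    x[0] = eps[0]
    for t in range(1, T):
        x[t] = rho * x[t - 1] + eps[t]
    return x
```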
4. Statistical Estimation by Empirical Laplace Methods
Parameter estimation in LLMP-AR(1) models leverages the empirical Laplace transform
$$\hat\psi_n(s) = \frac{1}{n}\sum_{i=1}^{n} e^{-s X_i}$$
for observed samples $X_1, \dots, X_n$. For the time series $\{X_t\}$, residuals $\hat\epsilon_t = X_t - \rho X_{t-1}$ are computed for each candidate $\rho$. The loss
$$Q_n(\theta) = \sum_{k=1}^{K} \bigl[\hat\psi_n(s_k) - \psi_{\theta}(s_k)\bigr]^2, \qquad \theta = (\alpha, \lambda, \rho),$$
comparing the empirical transform with the model transform $\psi_\theta$ on a grid $s_1, \dots, s_K$, is minimized over $\theta$. Consistency and asymptotic normality at the $\sqrt{n}$-rate hold under standard regularity assumptions. Simulations across a range of $(\alpha, \lambda, \rho)$ settings demonstrate that the root-mean-square error (RMSE) and mean absolute error (MAE) of the estimates remain small for all parameters, and the method yields concentrated boxplots around the true values (Dhull, 10 Jan 2026).
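A minimal sketch of this minimum-distance fit for the innovation-ML variant (variant B), reusing `llmp_ar1_innovation` and the ML transform assumed above; the grid, starting point, bounds, and optimizer are illustrative choices rather than the tuning used in (Dhull, 10 Jan 2026).

```python
import numpy as np
from scipy.optimize import minimize

def empirical_lt(x, s):
    """Empirical Laplace transform psi_hat(s) = (1/n) * sum_i exp(-s * x_i)."""
    return np.exp(-np.outer(s, x)).mean(axis=1)

def fit_llmp_ar1(x, s_grid=None):
    """Fit (rho, alpha, lam) by matching the empirical Laplace transform of the
    residuals eps_t = x_t - rho * x_{t-1} to the ML law 1/(1 + lam * s**alpha)."""
    if s_grid is None:
        s_grid = np.linspace(0.1, 5.0, 25)

    def loss(theta):
        rho, alpha, lam = theta
        eps = x[1:] - rho * x[:-1]
        if np.any(eps < 0):             # ML innovations are nonnegative
            return 1e6
        model = 1.0 / (1.0 + lam * s_grid ** alpha)
        return np.sum((empirical_lt(eps, s_grid) - model) ** 2)

    res = minimize(loss, x0=np.array([0.3, 0.7, 1.0]),
                   method="Nelder-Mead",
                   bounds=[(0.0, 0.999), (0.05, 1.0), (1e-3, 1e2)])
    return res.x

x = llmp_ar1_innovation(T=5_000, alpha=0.7, lam=1.0, rho=0.5, seed=1)
rho_hat, alpha_hat, lam_hat = fit_llmp_ar1(x)
print(f"rho≈{rho_hat:.2f}, alpha≈{alpha_hat:.2f}, lam≈{lam_hat:.2f}")
```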
5. Analytical Methods and Hurst/Memory Exponent Relation
For Gaussian A-LLMPs, analytical relationships between fluctuation exponents and kernel parameters are established via Yule–Walker equations. Using the autocovariance ansatz $C(m) \approx C(0) - A\,m^{2H}$ implied by $F(m) \sim m^{H}$ (since $F(m)^2 = 2\bigl[C(0)-C(m)\bigr]$), the first three Yule–Walker equations yield coupled constraints for $H$ and $\beta$ (Sakaguchi et al., 2015). Numerical solution provides $H$ as a function of $\beta$, showing that arbitrary subdiffusive dynamics ($0 < H < 1/2$) can be realized by tuning $\beta$. Fast RMSD evaluation leverages Fourier-space sums, while direct time-series simulation enables empirical increment computation.
In the LLMP-AR(1) case, closed-form expressions via Laplace transforms dictate both stationary marginals and required innovation laws. The nonexistence of higher moments and classical autocorrelation necessitates alternative statistical tools for fluctuation and dependence analysis.
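The one-step reasoning behind these closed forms, written out under the ML($\alpha,\lambda$) parametrization assumed above:

```latex
\begin{align*}
  X_t = \rho X_{t-1} + \epsilon_t,\ \epsilon_t \perp X_{t-1}
  \;&\Longrightarrow\;
  \psi_X(s) = \mathbb{E}\,e^{-s\rho X_{t-1}}\,\mathbb{E}\,e^{-s\epsilon_t}
            = \psi_X(\rho s)\,\phi_\epsilon(s),\\
  \phi_\epsilon(s) = \frac{\psi_X(s)}{\psi_X(\rho s)}
  &= \frac{1+\lambda\rho^{\alpha}s^{\alpha}}{1+\lambda s^{\alpha}}
   = \rho^{\alpha} + \bigl(1-\rho^{\alpha}\bigr)\,\frac{1}{1+\lambda s^{\alpha}}.
\end{align*}
```

The final mixture form is what makes variant A simulable: the innovation is zero with probability $\rho^\alpha$ and an independent ML($\alpha,\lambda$) draw otherwise, which is exactly the zero-inflated generator sketched in Section 3.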
6. Implications, Applications, and Empirical Evidence
A-LLMPs provide a mathematically controlled approach to generating processes with self-affine, subdiffusive scaling at small lags, encompassing both Gaussian long-memory and heavy-tailed, infinite-variance regimes. By tuning the model parameters ($\beta$, $\varepsilon$ or $\alpha$, $\lambda$, $\rho$), the scaling of small-$m$ fluctuations can be prescribed throughout the admissible range ($0 < H < 1/2$ for the Gaussian class, heavy tails of index $\alpha$ for the ML class).
Empirical Laplace-based inference on high-frequency trading inter-arrival data highlights the appropriateness of the ML law in capturing observed heavy tails (Dhull, 10 Jan 2026). A plausible implication is that such autoregressive mechanisms may underlie observed non-Gaussian scaling in finance and complex systems, where both long-memory and heavy-tailed fluctuations coexist.
These models extend the theoretical toolkit beyond AR, ARMA, and ARFIMA, enabling precise analysis of anomalous time-series fluctuation regimes, both in physically motivated Gaussian contexts and in heavy-tailed, high-frequency empirical domains.