Double Local-to-Unity: Estimation under Nearly Nonstationary Volatility (2512.06823v1)
Abstract: This article develops a moderate-deviation limit theory for autoregressive models with jointly persistent mean and volatility dynamics. The autoregressive coefficient is allowed to drift toward unity more slowly than the classical 1/n rate, while the volatility persistence parameter also converges to one at an even slower, logarithmic order, so that the conditional variance process is itself nearly nonstationary and its unconditional moments may diverge. This double localization allows the variance process to be nearly nonstationary and to evolve slowly, as observed in financial data and during asset-price bubble episodes. Under standard regularity conditions, we establish consistency and distributional limits for the OLS estimator of the autoregressive coefficient that remain valid in the presence of highly persistent stochastic volatility. We show that the effective normalization for least-squares inference is governed by an average volatility scale, and we derive martingale limit theorems for the OLS estimator under joint drift and volatility dynamics. In a mildly stationary regime (where the autoregressive root approaches one from below), the OLS estimator is asymptotically normal. In a mildly explosive regime (where the root approaches one from above), an OLS-based self-normalized statistic converges to a Cauchy limit. Strikingly, in both regimes, the limiting laws of our statistics are invariant to the detailed specification of the volatility process, even though the conditional variance is itself nearly nonstationary. Overall, the results extend moderate-deviation asymptotics to settings with drifting volatility persistence, unify local-to-unity inference with nearly nonstationary stochastic volatility, and deliver practically usable volatility-robust statistics for empirical work in settings approaching instability and exhibiting bubbles.
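To make the setup concrete, the following Python sketch simulates the kind of double local-to-unity process the abstract describes and computes the OLS estimator together with a self-normalized statistic. This is a minimal illustration, not the authors' code: the specific choices of c, alpha, the log-AR(1) volatility with logarithmic-order persistence, and the centering of the statistic at the true root are illustrative assumptions rather than specifications taken from the paper.

```python
# Minimal simulation sketch of an AR(1) with a moderate-deviation ("mildly"
# local-to-unity) root and nearly nonstationary stochastic volatility.
# All parameter choices below are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, c=2.0, alpha=0.7, explosive=False, omega=0.1):
    """Simulate y_t = rho_n * y_{t-1} + sigma_t * eps_t.

    rho_n = 1 -/+ c / n**alpha with alpha in (0, 1): the root drifts to one
    more slowly than the classical 1/n rate.
    log(sigma_t) follows an AR(1) whose coefficient phi_n = 1 - 1/log(n)
    drifts to one at a logarithmic order, so the variance process is itself
    nearly nonstationary (assumed specification for illustration).
    """
    rho = 1.0 + (c / n**alpha if explosive else -c / n**alpha)
    phi = 1.0 - 1.0 / np.log(n)
    log_sig = np.zeros(n)
    for t in range(1, n):
        log_sig[t] = phi * log_sig[t - 1] + omega * rng.standard_normal()
    sigma = np.exp(log_sig)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + sigma[t] * rng.standard_normal()
    return y, rho

def self_normalized_stat(y, rho_true):
    """OLS estimate of the AR coefficient and the self-normalized statistic
    (rho_hat - rho_true) * sqrt(sum y_{t-1}^2) / s, where s is the residual
    standard deviation."""
    ylag, ycur = y[:-1], y[1:]
    denom = np.sum(ylag ** 2)
    rho_hat = np.sum(ylag * ycur) / denom
    s = np.std(ycur - rho_hat * ylag, ddof=1)
    return rho_hat, (rho_hat - rho_true) * np.sqrt(denom) / s

# Mildly stationary root: the self-normalized statistic should look roughly
# standard normal across replications; mildly explosive root: the analogous
# statistic has a Cauchy-type limit, per the abstract.
for explosive in (False, True):
    y, rho = simulate(n=5000, explosive=explosive)
    rho_hat, stat = self_normalized_stat(y, rho)
    print(f"explosive={explosive}: rho={rho:.5f}, rho_hat={rho_hat:.5f}, stat={stat:.2f}")
```

A single run only produces point values; checking the approximate normality or Cauchy behavior claimed in the abstract would require repeating the simulation across many replications and inspecting the empirical distribution of the statistic.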