
Penalised complexity priors for stationary autoregressive processes (1608.08941v1)

Published 31 Aug 2016 in stat.ME

Abstract: The autoregressive process of order $p$ (AR($p$)) is a central model in time series analysis. A Bayesian approach requires the user to define a prior distribution for the coefficients of the AR($p$) model. Although it is easy to write down some prior, it is not at all obvious how to understand and interpret the prior, to ensure that it behaves according to the user's prior knowledge. In this paper, we approach this problem using the recently developed ideas of penalised complexity (PC) priors. These priors have important properties like robustness and invariance to reparameterisations, as well as a clear interpretation. A PC prior is computed based on specific principles, where model component complexity is penalised in terms of deviation from simple base model formulations. In the AR(1) case, we discuss two natural base model choices, corresponding to either independence in time or no change in time. The latter case is illustrated in a survival model with possible time-dependent frailty. For higher-order processes, we propose a sequential approach, where the base model for AR($p$) is the corresponding AR($p-1$) model expressed using the partial autocorrelations. The properties of the new prior are compared with the reference prior in a simulation study.
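The PC-prior construction the abstract describes can be illustrated numerically for the AR(1) case with the independence-in-time base model ($\rho = 0$): measure a model's complexity by the distance $d(\rho) = \sqrt{2\,\mathrm{KLD}(\rho)}$ from the base model, place an exponential prior on that distance, and transform back to the $\rho$ scale. The sketch below is not the paper's derivation; it assumes a unit-variance stationary AR(1) of fixed length `n`, a hypothetical rate parameter `lam`, and a numerical Jacobian.

```python
import numpy as np

def kld_ar1_vs_independence(rho, n=10):
    """KLD from a stationary AR(1) of length n (unit marginal variance,
    Toeplitz covariance S with entries rho^|i-j|) to the white-noise base:
    KLD = 0.5*(tr(S) - n - log det S) = -0.5*(n-1)*log(1 - rho**2)."""
    return -0.5 * (n - 1) * np.log(1.0 - rho**2)

def pc_prior_density(rho, n=10, lam=1.0, eps=1e-6):
    """PC prior on rho: exponential with rate lam on the distance
    d(rho) = sqrt(2*KLD), pushed back to the rho scale via |d'(rho)|
    (central difference); the factor 0.5 accounts for d being
    two-to-one in rho (symmetric about the base model rho = 0)."""
    d = np.sqrt(2.0 * kld_ar1_vs_independence(rho, n))
    d_plus = np.sqrt(2.0 * kld_ar1_vs_independence(rho + eps, n))
    d_minus = np.sqrt(2.0 * kld_ar1_vs_independence(rho - eps, n))
    jac = np.abs(d_plus - d_minus) / (2.0 * eps)
    return 0.5 * lam * np.exp(-lam * d) * jac

# The density concentrates at the base model rho = 0 and integrates
# to (approximately) one over (-1, 1); trapezoidal check on a grid.
grid = np.linspace(-0.999, 0.999, 20001)
dens = pc_prior_density(grid)
mass = float(np.sum(0.5 * (dens[1:] + dens[:-1]) * np.diff(grid)))
```

The complexity penalty is visible directly: the distance is zero at the base model and grows without bound as $|\rho| \to 1$, so the prior shrinks towards independence at a rate controlled by `lam`.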
