
Bayesian Distributed Lag Models (1801.06670v1)

Published 20 Jan 2018 in stat.AP

Abstract: Distributed lag models (DLMs) express the cumulative and delayed dependence between pairs of time-indexed response and explanatory variables. In practical application, users of DLMs examine the estimated influence of a series of lagged covariates to assess patterns of dependence. Much recent methodological work has sought to develop flexible parameterisations for smoothing the associated lag parameters that avoid overfitting. However, this paper finds that some widely-used DLMs introduce bias in the estimated lag influence, and are sensitive to the maximum lag which is typically chosen in advance of model fitting. Simulations show that bias and misspecification are dramatically reduced by generalising the smoothing model to allow varying penalisation of the lag influence estimates. The resulting model is shown to have substantially fewer effective parameters and lower bias, providing the user with confidence that the estimates are robust to prior model choice.
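The basic DLM structure the abstract refers to — a response regressed on a series of lagged copies of a covariate, with the lag coefficients penalised to keep them smooth — can be sketched as follows. This is a minimal illustration with simulated data and a simple ridge penalty standing in for the paper's Bayesian smoothing priors; the lag length, decay shape, and penalty weight are all hypothetical choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a covariate series and a response with delayed dependence.
# The true lag influence decays smoothly to zero (a hypothetical shape).
n, max_lag = 500, 10
x = rng.normal(size=n + max_lag)
true_beta = np.exp(-0.5 * np.arange(max_lag + 1))  # influence at lags 0..max_lag

# Lagged design matrix: column l holds the covariate shifted back l steps,
# so row t contains x_t, x_{t-1}, ..., x_{t-max_lag}.
X = np.column_stack(
    [x[max_lag - l : n + max_lag - l] for l in range(max_lag + 1)]
)
y = X @ true_beta + rng.normal(scale=0.5, size=n)

# Ridge-penalised least squares on the lag coefficients: a crude frequentist
# stand-in for smoothing the lag parameters, using a single fixed penalty
# rather than the varying penalisation the paper advocates.
lam = 1.0
beta_hat = np.linalg.solve(
    X.T @ X + lam * np.eye(max_lag + 1), X.T @ y
)
```

With a fixed penalty like this, the estimated lag profile `beta_hat` remains sensitive to the choice of `max_lag`; the paper's point is that letting the penalisation vary across lags reduces that sensitivity and the associated bias.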



Authors (1)