Exponentially Weighted Moving Models (2404.08136v2)

Published 11 Apr 2024 in stat.CO, eess.SP, math.OC, q-fin.CP, and stat.ML

Abstract: An exponentially weighted moving model (EWMM) for a vector time series fits a new data model each time period, based on an exponentially fading loss function on past observed data. The well known and widely used exponentially weighted moving average (EWMA) is a special case that estimates the mean using a square loss function. For quadratic loss functions EWMMs can be fit using a simple recursion that updates the parameters of a quadratic function. For other loss functions, the entire past history must be stored, and the fitting problem grows in size as time increases. We propose a general method for computing an approximation of EWMM, which requires storing only a window of a fixed number of past samples, and uses an additional quadratic term to approximate the loss associated with the data before the window. This approximate EWMM relies on convex optimization, and solves problems that do not grow with time. We compare the estimates produced by our approximation with the estimates from the exact EWMM method.


Summary

  • The paper introduces an approximate recursive method for EWMMs that uses a fixed-size window to manage non-quadratic loss functions.
  • It significantly reduces computational overhead and storage by approximating tail losses using Taylor expansion or a direct fitting method.
  • Extensive numerical experiments validate the approach in settings like quantile estimation, logistic regression, and sparse inverse covariance estimation.

Efficient Approximation Methods for Exponentially Weighted Moving Models

Introduction

Exponentially Weighted Moving Models (EWMMs) offer a flexible framework for fitting time-varying models to vector time series data. By assigning exponentially decaying weights to past observations, EWMMs prioritize recent data without disregarding historical information. EWMMs generalize the Exponentially Weighted Moving Average (EWMA), and the framework extends to a broad array of models, including quantile estimation, covariance estimation, and regression.
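
To make the setup concrete, here is a toy sketch in our own notation (the symbols, discounting convention, and function names are illustrative, not taken from the paper). At time t the EWMM estimate minimizes the exponentially weighted sum of per-sample losses; with a square loss, the minimizer is the exponentially weighted mean, which is what the familiar EWMA recursion computes.

```python
import numpy as np

# Toy illustration (our own notation, not the paper's code). The EWMM estimate
# at time t minimizes sum_{tau <= t} beta^(t - tau) * loss(x_tau, theta). With
# square loss this is the exponentially weighted mean, which the usual EWMA
# recursion m_t = beta * m_{t-1} + (1 - beta) * x_t reproduces up to how the
# very first sample is weighted (a difference that decays like beta^t).

def ewmm_square_loss_naive(xs, beta):
    """Exact minimizer of the exponentially weighted square loss at the final time."""
    t = len(xs)
    w = beta ** np.arange(t - 1, -1, -1)   # weight beta^(t-1-tau) for sample tau
    return np.sum(w * xs) / np.sum(w)      # weighted mean = argmin of the objective

def ewma_recursive(xs, beta):
    """Standard EWMA recursion; stores only the running estimate."""
    m = xs[0]
    for x in xs[1:]:
        m = beta * m + (1 - beta) * x
    return m

xs = np.random.default_rng(0).normal(size=200)
print(ewmm_square_loss_naive(xs, 0.9), ewma_recursive(xs, 0.9))   # nearly equal
```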

The fundamental challenge in employing EWMMs lies in their computation. For models with quadratic loss functions, an exact recursive computation akin to EWMAs is possible. However, for non-quadratic loss functions, direct computation necessitates storing the entire history of data, rendering the process increasingly impractical over time. This paper introduces an approximation method for EWMMs that circumvents this issue by maintaining only a fixed-size window of recent observations, thereby ensuring constant computation and storage requirements over time.

Exponentially Weighted Moving Models Overview

The EWMM framework integrates both historical and recent data in model fitting, with the significance of each data point diminishing exponentially into the past. This setup offers a more reactive and adaptable model compared to traditional rolling window models (RWMs), which equally weigh all observations within a fixed window. The flexibility of EWMMs is illustrated through their capability to estimate various statistics and parameters, such as quantiles and covariance matrices, effectively capturing the underlying dynamics of time series data.
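
As a quick illustration of the difference in weighting (toy code with arbitrary parameter values of our own choosing), a rolling window gives the last M samples weight one and everything older weight zero, whereas an EWMM gives every past sample a geometrically decaying weight determined by its age:

```python
import numpy as np

# Weight placed on a sample as a function of its age (0 = most recent).
M, beta, t = 10, 0.8, 25
ages = np.arange(t)
rolling_weights = (ages < M).astype(float)   # uniform inside the window, zero outside
ewmm_weights = beta ** ages                  # geometric decay, never exactly zero
print(np.round(rolling_weights, 3))
print(np.round(ewmm_weights, 3))
```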

Recursive and Approximate Methods

For quadratic loss functions, EWMMs admit a straightforward recursive computation that avoids revisiting the entire data history, which greatly reduces computational overhead. This approach applies directly to a range of models, including exponentially weighted least squares and sparse inverse covariance estimation.
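
For example, exponentially weighted least squares admits such a recursion because the weighted quadratic objective is fully summarized by a small matrix and vector. The sketch below is our own illustration of this idea (the function name, the ridge term used for early-step invertibility, and the synthetic example are ours, not the paper's code):

```python
import numpy as np

# Exponentially weighted least squares via an exact recursion: P accumulates
# beta^(t-tau) * x_tau x_tau^T and q accumulates beta^(t-tau) * y_tau x_tau,
# so the fit never needs to revisit past data.

def ewls(xs, ys, beta=0.95, ridge=1e-6):
    """Return the sequence of exponentially weighted least-squares coefficients."""
    n = xs.shape[1]
    P = ridge * np.eye(n)   # small ridge keeps P invertible in the first few steps
    q = np.zeros(n)
    thetas = []
    for x, y in zip(xs, ys):
        P = beta * P + np.outer(x, x)
        q = beta * q + y * x
        thetas.append(np.linalg.solve(P, q))
    return np.array(thetas)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))
    theta_true = np.array([1.0, -2.0, 0.5])
    y = X @ theta_true + 0.1 * rng.normal(size=500)
    print(ewls(X, y)[-1])   # should be close to theta_true
```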

For non-quadratic loss functions, by contrast, no exact recursion exists. To address this, the paper proposes an approximate recursion that combines a fixed-size window of recent data with a quadratic approximation of the "tail" loss, the loss attributable to data that have already left the window. The quadratic is constructed either by a Taylor expansion, suitable for twice-differentiable loss functions, or by a direct fitting method that uses an additional subset of past data to refine the tail approximation.
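
The sketch below illustrates the Taylor-expansion variant on a simple robust (Huber) location estimate, following our reading of the description above; all names, the use of CVXPY, and the exact placement of discount factors are our own choices, so treat it as a minimal sketch rather than the paper's algorithm. Each time a sample leaves the window, its loss is replaced by a second-order expansion around the current estimate and folded into a running quadratic tail term, so the convex problem solved at each step has fixed size.

```python
import numpy as np
import cvxpy as cp

def huber_val_grad_hess(x, theta, M=1.0):
    """Value, gradient, and Hessian (in theta) of the scalar Huber loss."""
    r = theta - x
    if abs(r) <= M:
        return 0.5 * r ** 2, r, 1.0
    return M * abs(r) - 0.5 * M ** 2, M * np.sign(r), 0.0

def approx_ewmm_huber(xs, beta=0.9, window=20, M=1.0):
    """Approximate EWMM for a Huber location estimate with bounded memory."""
    P, q = 0.0, 0.0           # tail quadratic V(theta) ~ 0.5*P*theta^2 + q*theta
    buf, thetas = [], []
    theta_hat = 0.0
    for x in xs:
        buf.append(float(x))
        if len(buf) > window:
            # Oldest sample leaves the window: approximate its loss by a
            # second-order Taylor expansion around the current estimate and
            # fold it into the tail quadratic (tail is also discounted by beta).
            x_old = buf.pop(0)
            _, g, h = huber_val_grad_hess(x_old, theta_hat, M)
            w = beta ** window                 # weight of the departing sample
            P = beta * P + w * h
            q = beta * q + w * (g - h * theta_hat)
        # Solve the fixed-size convex problem: window losses + quadratic tail.
        theta = cp.Variable()
        weights = [beta ** (len(buf) - 1 - i) for i in range(len(buf))]
        window_loss = sum(w_i * cp.huber(theta - x_i, M)
                          for w_i, x_i in zip(weights, buf))
        tail = 0.5 * P * cp.square(theta) + q * theta
        cp.Problem(cp.Minimize(window_loss + tail)).solve()
        theta_hat = float(theta.value)
        thetas.append(theta_hat)
    return thetas

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
    print(approx_ewmm_huber(data)[-5:])   # estimates should drift toward 3
```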

Numerical Examples and Implications

The paper presents extensive numerical experiments that validate the proposed approximate EWMM in several settings, including quantile estimation, logistic regression, and sparse inverse covariance estimation. The results show that the approximations closely track the exact EWMMs while requiring far less computation and storage. In particular, the examples highlight the method's adaptability to both smoothly varying data and scenarios that call for parameter regularization.
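
For a flavor of one such setting, the snippet below computes an exact exponentially weighted quantile estimate by minimizing a weighted pinball loss over the full history (a hedged sketch with our own names, again using CVXPY). The pinball loss is not differentiable, so this exact fit must touch every past sample; it is precisely the kind of problem the windowed approximation keeps at a fixed size.

```python
import numpy as np
import cvxpy as cp

# Exact exponentially weighted quantile (EWMM with pinball loss); cost grows with t.

def ewm_quantile(xs, alpha=0.9, beta=0.95):
    x = np.asarray(xs, dtype=float)
    t = len(x)
    w = beta ** np.arange(t - 1, -1, -1)   # exponential weights, newest sample = 1
    q = cp.Variable()
    resid = x - q
    pinball = cp.sum(cp.multiply(w, cp.maximum(alpha * resid, (alpha - 1) * resid)))
    cp.Problem(cp.Minimize(pinball)).solve()
    return float(q.value)

xs = np.random.default_rng(2).lognormal(size=300)
print(ewm_quantile(xs, alpha=0.9))   # roughly an exponentially weighted 90th percentile
```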

Conclusion and Future Work

By introducing a practical and computationally efficient method for approximating EWMMs, the paper addresses a critical gap in applying time-varying models to large datasets. The approximation keeps the models adaptive and accurate over time without requiring unbounded data storage or ever-growing computation.

The potential for future research includes exploring more sophisticated approximation methods for the tail loss, enhancing the efficiency of the approximation in high-dimensional settings, and investigating the integration of EWMMs with online learning algorithms to further bolster their applicability in real-time analysis scenarios.

Acknowledgements

Feedback and suggestions from peers and scholars in the field during the development of this work are gratefully acknowledged.

Open Problems

We found no open problems mentioned in this paper.
