
A Two-Way Transformed Factor Model for Matrix-Variate Time Series (2011.09029v1)

Published 18 Nov 2020 in econ.EM and stat.ME

Abstract: We propose a new framework for modeling high-dimensional matrix-variate time series by a two-way transformation, where the transformed data consist of a matrix-variate factor process, which is dynamically dependent, and three other blocks of white noise. Specifically, for a given $p_1\times p_2$ matrix-variate time series, we seek common nonsingular transformations to project the rows and columns onto another $p_1$ and $p_2$ directions according to the strength of the dynamic dependence of the series on the past values. Consequently, we treat the data as nonsingular linear row and column transformations of dynamically dependent common factors and white noise idiosyncratic components. We propose a common orthonormal projection method to estimate the front and back loading matrices of the matrix-variate factors. Under the setting that the largest eigenvalues of the covariance of the vectorized idiosyncratic term diverge for large $p_1$ and $p_2$, we introduce a two-way projected Principal Component Analysis (PCA) to estimate the associated loading matrices of the idiosyncratic terms to mitigate such diverging noise effects. A diagonal-path white noise testing procedure is proposed to estimate the order of the factor matrix. Asymptotic properties of the proposed method are established for both fixed and diverging dimensions as the sample size increases to infinity. We use simulated and real examples to assess the performance of the proposed method. We also compare our method with some existing ones in the literature and find that the proposed approach not only provides interpretable results but also performs well in out-of-sample forecasting.
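The abstract describes estimating front and back loading matrices of a matrix-variate factor model from the dynamic dependence of the series. The sketch below is not the authors' estimator; it is a minimal illustrative example of the general idea in this literature: simulate $Y_t = A F_t B^\top + E_t$, then recover the row and column loading spaces from an eigen-analysis of accumulated lag-autocovariance matrices. All dimensions, the AR(1) factor dynamics, and the function names are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
p1, p2, k1, k2, T = 10, 8, 2, 2, 500

# Hypothetical loadings and a simple AR(1) matrix factor process
A = rng.standard_normal((p1, k1))
B = rng.standard_normal((p2, k2))
F = np.zeros((T, k1, k2))
Y = np.zeros((T, p1, p2))
for t in range(1, T):
    F[t] = 0.7 * F[t - 1] + rng.standard_normal((k1, k2))
    Y[t] = A @ F[t] @ B.T + rng.standard_normal((p1, p2))

def loading_space(Y, k, h0=2):
    """Estimate a loading space from lagged autocovariances:
    accumulate M = sum_h Omega_h Omega_h' and keep top-k eigenvectors.
    (A generic sketch, not the paper's two-way projection method.)"""
    T, p = Y.shape[0], Y.shape[1]
    M = np.zeros((p, p))
    for h in range(1, h0 + 1):
        Omega = sum(Y[t] @ Y[t - h].T for t in range(h, T)) / T
        M += Omega @ Omega.T
    vals, vecs = np.linalg.eigh(M)       # ascending eigenvalues
    return vecs[:, -k:]                  # top-k eigenvectors

A_hat = loading_space(Y, k1)                           # front (row) loadings
B_hat = loading_space(np.transpose(Y, (0, 2, 1)), k2)  # back (column) loadings

def space_dist(U, V):
    """Spectral-norm distance between the column spaces of U and V."""
    Pu = U @ np.linalg.pinv(U.T @ U) @ U.T
    Pv = V @ np.linalg.pinv(V.T @ V) @ V.T
    return np.linalg.norm(Pu - Pv, 2)

dA, dB = space_dist(A, A_hat), space_dist(B, B_hat)
print(dA, dB)  # small values indicate the spaces are well recovered
```

With strong factors and $T = 500$, both distances are close to zero; the true loadings are recoverable only up to a nonsingular transformation, which is why column spaces, not matrices, are compared.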
