An Efficient and Interpretable Autoregressive Model for High-Dimensional Tensor-Valued Time Series (2506.01658v1)
Abstract: In autoregressive modeling for tensor-valued time series, Tucker decomposition, when applied to the coefficient tensor, provides a clear interpretation of supervised factor modeling but loses efficiency rapidly as the tensor order increases. Conversely, canonical polyadic (CP) decomposition maintains efficiency but lacks a precise statistical interpretation. To attain both interpretability and powerful dimension reduction, this paper proposes a novel approach under the supervised factor modeling paradigm, which first uses CP decomposition to extract response and covariate features separately and then regresses the response features on the covariate features. This leads to a new CP-based low-rank structure for the coefficient tensor. Furthermore, to address heterogeneous signals or potential model misspecification arising from stringent low-rank assumptions, a low-rank plus sparse model is introduced by incorporating an additional sparse coefficient tensor. Nonasymptotic properties are established for the ordinary least squares estimators, and an alternating least squares algorithm is introduced for optimization. Theoretical properties of the proposed methodology are validated by simulation studies, and its enhanced prediction performance and interpretability are demonstrated by the El Niño-Southern Oscillation example.
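The model class summarized above (a tensor autoregression whose coefficient tensor is CP-type low rank plus sparse, estimated by least squares) can be made concrete with a small simulation. The sketch below is illustrative only and is not the paper's estimator: the dimensions, the rank-R CP form of the order-4 coefficient tensor, the hand-placed sparse entries, and the unstructured OLS baseline are all assumptions chosen for demonstration, since the abstract does not spell out the exact model equations.

```python
# Minimal, illustrative sketch (assumed setup, not the paper's exact model):
# a matrix-valued AR(1) process Y_t[i,j] = sum_{k,l} A[i,j,k,l] * Y_{t-1}[k,l] + noise,
# where the order-4 coefficient tensor A is "low rank (CP) plus sparse".
import numpy as np

rng = np.random.default_rng(0)
p, q, R, T = 4, 3, 2, 500          # matrix dimensions, assumed CP rank, sample size

# Rank-R CP component: A_lowrank[i,j,k,l] = sum_r U[i,r] V[j,r] W[k,r] X[l,r]
U, V, W, X = (0.4 * rng.standard_normal((d, R)) for d in (p, q, p, q))
A_lowrank = np.einsum("ir,jr,kr,lr->ijkl", U, V, W, X)

# Sparse component: a few hand-placed nonzero entries (purely illustrative)
A_sparse = np.zeros((p, q, p, q))
A_sparse[0, 0, 1, 1] = 0.3
A = A_lowrank + A_sparse

# Rescale the matricized coefficient so the AR(1) process stays stationary
# (this rescaling is only for a well-behaved simulation)
M = A.reshape(p * q, p * q)
M *= 0.9 / max(1e-8, np.max(np.abs(np.linalg.eigvals(M))))
A = M.reshape(p, q, p, q)

# Simulate the matrix-valued AR(1) process
Y = np.zeros((T, p, q))
for t in range(1, T):
    Y[t] = np.einsum("ijkl,kl->ij", A, Y[t - 1]) + 0.1 * rng.standard_normal((p, q))

# Unstructured OLS baseline: regress vec(Y_t) on vec(Y_{t-1})
Z_resp = Y[1:].reshape(T - 1, p * q)    # vectorized responses
Z_cov = Y[:-1].reshape(T - 1, p * q)    # vectorized covariates
B_ols, *_ = np.linalg.lstsq(Z_cov, Z_resp, rcond=None)
A_ols = B_ols.T.reshape(p, q, p, q)     # back to coefficient-tensor form

print("relative OLS estimation error:", np.linalg.norm(A_ols - A) / np.linalg.norm(A))
```

Replacing the single unstructured `lstsq` step with alternating updates of the CP factor matrices (each solved by least squares with the other factors held fixed) would mimic, in spirit, the alternating least squares scheme the abstract refers to; the paper's actual algorithm and low-rank-plus-sparse estimator are defined in the full text.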