
Trend-Adjusted Time Series Model

Updated 26 January 2026
  • Trend-Adjusted Time Series (TATS) models are a class of methods that decompose data into deterministic trends, seasonal components, and irregular residuals.
  • They integrate classical, state-space, Bayesian, and deep learning approaches to improve inference, prediction, and interpretability.
  • TATS models use adaptive parameter selection and dynamic breakpoint detection with criteria like BIC and AICc to enhance forecast accuracy across diverse datasets.

Trend-Adjusted Time Series (TATS) models are a broad class of methodologies designed to explicitly account for deterministic (or stochastic) trend, seasonal, and structural features in time series, providing adaptive decompositions for both univariate and multivariate contexts. The essential principle is to separate the evolving mean or directionality from stochastic residual or irregular components, enabling improved inference, prediction, and interpretability. TATS approaches span classical regression/filtering, state-space/Bayesian models, deep learning, and adaptive shrinkage frameworks. This article presents a comprehensive technical overview of key TATS methods and their theoretical and empirical properties.

1. Structural Decomposition and Model Architectures

TATS methodologies posit that an observed series $y_t$ (or $y_{it}$ for a multivariate $p$-vector series) arises as the sum of a deterministic trend $\mu_t$, a seasonal component $\gamma_t$, and an irregular or residual term $\xi_t$:

$$y_{it} = \mu_{it} + \gamma_{it} + \xi_{it}, \qquad i = 1, \ldots, p, \quad t = 1, \ldots, T$$

This structure accommodates:

  • Polynomial or piecewise-linear trend models: Trends may be specified as polynomials (for stationarity or smooth evolution) or segmented linear functions with breakpoints, optionally subject to continuity constraints (Abdikhadir, 9 Jun 2025).
  • Deterministic seasonality via Fourier or trigonometric bases: Seasonal variation is expressed as sums of sines and cosines of known period $s$ (Gao et al., 2018, Abdikhadir, 9 Jun 2025).
  • Stochastic residual modeling: The irregular term can be captured via ARMA processes, factor models (for high-dimensional data), or latent states (Gao et al., 2018, Li et al., 2022).
  • State-space and Bayesian extensions: Trends, seasonality, and coefficients themselves can be modeled as random walks or latent processes evolving in a state-space framework (Hazra, 2015, Schafer et al., 2023).
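As a minimal illustration of the additive structure above, the following sketch (with illustrative parameter values, not taken from any cited paper) simulates a series with a linear trend, one Fourier harmonic of period $s$, and white-noise residuals, then recovers the three components by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
T, s = 200, 12
t = np.arange(T)

# Simulate y_t = mu_t + gamma_t + xi_t: linear trend, one seasonal
# harmonic of period s, and Gaussian white-noise residuals.
mu = 0.05 * t
gamma = 1.5 * np.sin(2 * np.pi * t / s) + 0.8 * np.cos(2 * np.pi * t / s)
xi = rng.normal(scale=0.3, size=T)
y = mu + gamma + xi

# Recover the components by regressing y on a polynomial trend
# plus a trigonometric seasonal basis (ordinary least squares).
X = np.column_stack([
    np.ones(T), t,                # trend: intercept + slope
    np.sin(2 * np.pi * t / s),    # seasonal Fourier pair
    np.cos(2 * np.pi * t / s),
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
trend_hat = X[:, :2] @ beta[:2]
seasonal_hat = X[:, 2:] @ beta[2:]
residual_hat = y - trend_hat - seasonal_hat
```

Because the trend and seasonal regressors are (asymptotically) orthogonal, the fitted components separate cleanly and the residuals retain only the irregular term.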

Table: Decomposition in principal TATS frameworks

| Model | Trend specification | Seasonality | Irregular component |
| --- | --- | --- | --- |
| Gao & Tsay (Gao et al., 2018) | Polynomial | Trigonometric series | Factor model (common/noise) |
| LTSTA (Abdikhadir, 9 Jun 2025) | Piecewise-linear | Fourier basis | ARMA(p, q) |
| Hazra (Hazra, 2015) | Latent random walk | Lagged latent state | Gaussian error |

2. Model Selection, Estimation, and Trend Adaptivity

Selection of latent structural orders is critical for TATS models. In high-dimensional settings, the polynomial trend order $d$ and seasonal order $k$ are chosen via marginal BIC minimization over candidate tuples for each series, with global orders $\hat d, \hat k$ assigned as the maxima over the individual optima (Gao et al., 2018). These selectors are formally consistent as both $T, p \to \infty$ under regularity conditions.
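The per-series BIC search can be sketched as follows; the candidate grid, the marginal BIC formula, and the design construction are a simplified single-series version of the procedure, not the exact high-dimensional implementation of Gao et al. (2018):

```python
import numpy as np
from itertools import product

def select_orders_bic(y, s, d_max=3, k_max=3):
    """Choose polynomial trend order d and seasonal order k by BIC.
    Candidates: degree-d polynomial plus k Fourier harmonics of period s."""
    T = len(y)
    t = np.arange(T, dtype=float)
    best = None
    for d, k in product(range(d_max + 1), range(k_max + 1)):
        cols = [t**j for j in range(d + 1)]            # polynomial trend
        for h in range(1, k + 1):                      # seasonal harmonics
            cols += [np.sin(2*np.pi*h*t/s), np.cos(2*np.pi*h*t/s)]
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        bic = T * np.log(rss / T) + X.shape[1] * np.log(T)
        if best is None or bic < best[0]:
            best = (bic, d, k)
    return best[1], best[2]
```

In the multivariate setting, this search would be run for each of the $p$ series and the global $\hat d, \hat k$ taken as the maxima of the per-series optima.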

Piecewise-linear trend models estimate break locations via dynamic programming, minimizing the sum of squared residuals (SSR) across feasible segmentation patterns (Abdikhadir, 9 Jun 2025). Seasonal and ARMA orders are typically determined via information criteria (AICc) or likelihood.
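The dynamic-programming break search can be sketched as below; the per-segment linear fit, the fixed segment count, and the `min_len` guard are illustrative assumptions, not the exact LTSTA implementation:

```python
import numpy as np

def segment_breaks(y, n_segments, min_len=5):
    """Optimal-partitioning DP for piecewise-linear trend breakpoints:
    minimise the total sum of squared residuals (SSR) of per-segment
    linear fits over all feasible segmentations."""
    T = len(y)
    t = np.arange(T, dtype=float)

    def ssr(i, j):  # SSR of a linear fit on y[i:j]
        X = np.column_stack([np.ones(j - i), t[i:j]])
        beta, *_ = np.linalg.lstsq(X, y[i:j], rcond=None)
        r = y[i:j] - X @ beta
        return float(r @ r)

    INF = float("inf")
    # F[k][j]: best SSR covering y[0:j] with k segments; back[k][j]: last cut.
    F = [[INF] * (T + 1) for _ in range(n_segments + 1)]
    back = [[0] * (T + 1) for _ in range(n_segments + 1)]
    F[0][0] = 0.0
    for k in range(1, n_segments + 1):
        for j in range(k * min_len, T + 1):
            for i in range((k - 1) * min_len, j - min_len + 1):
                if F[k - 1][i] < INF:
                    cand = F[k - 1][i] + ssr(i, j)
                    if cand < F[k][j]:
                        F[k][j], back[k][j] = cand, i
    breaks, j = [], T          # backtrack to recover segment starts
    for k in range(n_segments, 0, -1):
        j = back[k][j]
        breaks.append(j)
    return sorted(b for b in breaks if 0 < b < T)
```

Each segment is fit independently here; imposing the continuity constraints mentioned above would require joining segment endpoints in the per-segment regression.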

State-space and Bayesian versions apply Gibbs sampling, exploiting univariate normal conditionals for latent trends, seasonality, and variances, permitting simultaneous learning without matrix inversion or ad-hoc detrending (Hazra, 2015). Shrinkage priors further enhance adaptivity, allowing latent trend smoothness to vary over time and to accommodate local structural breaks or spikes (Schafer et al., 2023).
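A minimal single-site Gibbs sketch for the simplest such model (a local level with a random-walk trend) is shown below; the weak IG(2, 1) priors and moving-average initialisation are assumptions of this sketch, not the exact sampler of Hazra (2015):

```python
import numpy as np

def gibbs_local_level(y, n_iter=500, seed=0):
    """Single-site Gibbs sampler for a local-level TATS model,
        y_t = mu_t + eps_t,   mu_t = mu_{t-1} + eta_t,
    drawing each latent mu_t from its univariate normal full
    conditional (so no matrix inversion is needed), then the two
    variances from inverse-gamma conditionals."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    T = len(y)
    mu = np.convolve(y, np.ones(9) / 9, mode="same")  # start near a smooth fit
    sig2 = tau2 = 1.0
    draws = []
    for it in range(n_iter):
        for t in range(T):
            prec = 1.0 / sig2              # precision from observation y_t
            num = y[t] / sig2
            if t > 0:                      # neighbour mu_{t-1} (random walk)
                prec += 1.0 / tau2
                num += mu[t - 1] / tau2
            if t < T - 1:                  # neighbour mu_{t+1}
                prec += 1.0 / tau2
                num += mu[t + 1] / tau2
            mu[t] = rng.normal(num / prec, np.sqrt(1.0 / prec))
        r = y - mu
        d = np.diff(mu)
        # Inverse-gamma conditionals under weak IG(2, 1) priors.
        sig2 = 1.0 / rng.gamma(2.0 + T / 2.0, 1.0 / (1.0 + 0.5 * r @ r))
        tau2 = 1.0 / rng.gamma(2.0 + (T - 1) / 2.0, 1.0 / (1.0 + 0.5 * d @ d))
        if it >= n_iter // 2:
            draws.append(mu.copy())
    return np.mean(draws, axis=0)          # posterior-mean latent trend
```

Averaging the second half of the draws gives a posterior-mean trend estimate; the full samplers cited above add seasonal states and, in Schafer et al. (2023), global-local shrinkage priors on the trend increments.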

3. Deep Learning TATS Extensions

Recent advances infuse deep learning architectures into the TATS paradigm. In DeepVARwT, a multivariate VAR($p$) model is coupled with a deterministic trend generated by an LSTM network, which ingests exogenous time functions and outputs a nonlinear trend vector for each series (Li et al., 2022). Trend, VAR coefficients, and noise covariances are jointly estimated via maximum likelihood, enabling simultaneous identification of both persistent drift and cross-series temporal dependence.

To maintain stability, DeepVARwT enforces causality via the Ansley-Kohn transformation on the AR coefficient matrices, projecting unconstrained network outputs to the causal region through partial-autocorrelation recursion (Li et al., 2022).
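The core idea of this projection is easiest to see in the univariate case: squash unconstrained outputs into partial autocorrelations in $(-1, 1)$, then run the Durbin-Levinson recursion, which yields AR coefficients that are causal by construction. The following is a univariate sketch of that idea, not the full matrix-valued Ansley-Kohn transformation used in DeepVARwT:

```python
import numpy as np

def causal_ar_coeffs(raw):
    """Map unconstrained outputs to causal AR coefficients: tanh turns
    each raw value into a partial autocorrelation in (-1, 1), and the
    Durbin-Levinson recursion converts the partial autocorrelations
    into AR coefficients whose process is always stationary/causal."""
    pacf = np.tanh(np.asarray(raw, dtype=float))
    phi = np.array([pacf[0]])
    for k in range(1, len(pacf)):
        # phi_{k,j} = phi_{k-1,j} - pacf_k * phi_{k-1,k-j}, then append pacf_k
        phi = np.append(phi - pacf[k] * phi[::-1], pacf[k])
    return phi

# Any raw input yields a causal AR polynomial: the companion-form
# roots of z^p - phi_1 z^{p-1} - ... - phi_p lie inside the unit circle.
phi = causal_ar_coeffs([2.0, -1.3, 0.7])
roots = np.roots(np.concatenate([[1.0], -phi]))
assert np.all(np.abs(roots) < 1.0)
```

Because the mapping is smooth, gradients can flow through it during network training, which is what makes the constraint usable inside DeepVARwT's likelihood optimization.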

Additionally, hybrid architectures decompose forecasting into trend direction (binary classification, e.g., via XGBoost) and quantitative estimation (regression, e.g., LSTM/Bi-LSTM), adjusting value forecasts based on classifier output (Kazemdehbashi, 19 Jan 2026). Corrective steps are triggered when regression and classifier disagree on the forecasted direction.
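The corrective step can be sketched as follows; the constant offset `delta` and the probability threshold `thresh` are illustrative tuning parameters, and the real systems use trained XGBoost/LSTM components in place of the scalar inputs here:

```python
def direction_adjusted_forecast(y_last, y_hat, up_prob, delta=1.0, thresh=0.5):
    """Hybrid classifier+regressor adjustment: when the trend classifier's
    direction (up_prob > thresh means 'up') disagrees with the sign of the
    regression forecast's implied move, nudge the value forecast by a
    constant offset delta toward the classified direction."""
    move = y_hat - y_last
    clf_up = up_prob > thresh
    if clf_up and move < 0:
        return y_last + delta      # classifier says up, regressor says down
    if not clf_up and move > 0:
        return y_last - delta      # classifier says down, regressor says up
    return y_hat                   # directions agree: keep the base forecast

assert direction_adjusted_forecast(100.0, 99.0, up_prob=0.9) == 101.0
assert direction_adjusted_forecast(100.0, 102.0, up_prob=0.8) == 102.0
```

The design choice is that the adjustment only fires on disagreement, so a well-calibrated classifier improves MSE without perturbing forecasts the regressor already gets directionally right.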

4. Algorithmic Steps, Forecasting Procedures, and Theoretical Guarantees

A generic TATS workflow entails:

  1. Structural fit: Regress observations on polynomial, segmented, or LSTM-generated trend + trigonometric seasonality.
  2. Residual modeling: Fit ARMA, factor, or stochastic state models to detrended/seasonally-adjusted residuals.
  3. Parameter selection: Employ BIC, AICc, or likelihood for optimal structural and residual orders.
  4. Factor/VAR model estimation: For multivariate series, extract common factors via canonical correlation or deep learning; fit VAR/VARIMA models to common components (Gao et al., 2018, Li et al., 2022).
  5. Forecasting: Update trend/seasonal parameters, predict latent states or factors recursively, and reconstruct forecasted observations via summation of deterministic and stochastic components.
  6. Posterior inference (Bayesian models): Generate MCMC draws for forecasting distribution, compute credible intervals, and propagate uncertainty.
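Steps 1–3 and 5 of this workflow, in their simplest univariate form, can be sketched end to end as below; the degree-$d$ polynomial plus $k$-harmonic design and the AR(1) residual fit by lagged regression are deliberate simplifications of the richer ARMA/factor machinery described above:

```python
import numpy as np

def tats_forecast(y, s, h, d=1, k=1):
    """Generic TATS workflow sketch: (1) OLS fit of a degree-d polynomial
    trend plus k Fourier harmonics of period s; (2) AR(1) fit to the
    residuals by lagged regression; (3) h-step forecast reconstructed as
    extrapolated deterministic components plus the geometrically decaying
    AR(1) residual prediction."""
    T = len(y)

    def design(t):
        cols = [t**j for j in range(d + 1)]
        for m in range(1, k + 1):
            cols += [np.sin(2*np.pi*m*t/s), np.cos(2*np.pi*m*t/s)]
        return np.column_stack(cols)

    t = np.arange(T, dtype=float)
    X = design(t)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # structural fit
    resid = y - X @ beta                             # detrended residuals

    phi = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])  # AR(1) slope
    t_new = np.arange(T, T + h, dtype=float)
    det = design(t_new) @ beta                       # trend + seasonality
    ar = resid[-1] * phi ** np.arange(1, h + 1)      # residual forecast
    return det + ar
```

As the horizon grows, the AR(1) term decays toward zero and the forecast reverts to the extrapolated deterministic trend and seasonal cycle, which is the characteristic long-run behaviour of TATS forecasts.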

Theoretical properties include root-$T$ consistency of trend/seasonal estimators, factor number selection consistency, and stable convergence rates for loading/error matrices in high-dimensional settings (Gao et al., 2018). For hybrid classifier+regressor TATS, expected MSE reduction is quantitatively linked to the respective accuracy of the trend classifier vs. the base forecaster (Kazemdehbashi, 19 Jan 2026).

5. Empirical Performance and Comparative Evaluation

Empirical validation covers financial, economic, environmental, and count-data time series:

  • On daily gold price forecasting, TATS with trend detection (XGBoost) and sequence regression (LSTM/Bi-LSTM) reduces test MSE by 13–17% over baseline neural models and elevates trend detection accuracy by over 20% (Kazemdehbashi, 19 Jan 2026).
  • The LTSTA TATS approach, decomposing US GDP into piecewise-linear trend, Fourier seasonality, and ARMA errors, accurately detects known economic breakpoints (e.g., 2008 crisis, COVID-19 shocks) and attains lower MAE/RMSE than SES, Theta, TBATS, ETS, ARIMA, and Prophet on competition datasets (Abdikhadir, 9 Jun 2025).
  • State-space and Bayesian TATS models forecast nonstationary meteorological temperature series and address missing data seamlessly (Hazra, 2015).
  • Locally adaptive shrinkage TATS approaches for count time series (NB-BTF) outperform penalized trend-filtering and classical state-space models for multiscale trend estimation and uncertainty quantification (Schafer et al., 2023).
  • DeepVARwT achieves superior long-range forecast accuracy relative to two-stage VAR+trend models and RNN-based alternatives, especially for nonstationary, multivariate economic time series (Li et al., 2022).

6. Interpretability, Implementation, and Limitations

TATS decompositions offer explicit representation of time-varying trend magnitude, seasonal behavior, and break dates, which facilitate interpretation in economic and scientific applications. Practitioners can trace regime changes to external events or interventions, directly assess seasonal cycles, and diagnose short-term error behavior through ARMA or state modeling.

Implementation strategies depend on data context:

  • High-dimensional series leverage efficient regression and canonical extraction (factor models), or deep learning with stability constraints.
  • Solution algorithms utilize dynamic programming for breaks, sparse matrix solvers for Bayesian state-space updates, XGBoost for trend classification, and MCMC for credible prediction intervals.
  • Adaptive shrinkage techniques furnish locally controlled smoothing and break detection for multiscale count data.

Limitations include the simplicity of some adjustment functions (e.g., constant offsets for trend disagreements (Kazemdehbashi, 19 Jan 2026)), possible underestimation of interval coverage due to omitted breakpoint uncertainty, and restricted empirical generalization to domains outside those directly tested (e.g., health care or sensor signals).

7. Extensions, Variants, and Current Research Directions

Current research explores extensions such as:

  • Colored noise and ARMA residual embedding within Kalman filters (Wang et al., 2019).
  • Integration of NLP features or LLMs for trend detection in financial and social domains (Kazemdehbashi, 19 Jan 2026).
  • Joint end-to-end training of trend classifiers and forecasters, anticipated as a direction for future work (Kazemdehbashi, 19 Jan 2026).
  • Bayesian modeling for trend and seasonality parameters with dynamic global-local priors for enhanced adaptivity (Schafer et al., 2023).

Collectively, TATS models synthesize deterministic and stochastic structure for robust, interpretable, and adaptive time series analysis across a wide range of contemporary research problems.
