Trend-Adjusted Time Series Models

Updated 10 February 2026
  • Trend-Adjusted Time Series (TATS) models are methodologies that explicitly decompose time series into trend, seasonal, and residual components to enhance forecasting and interpretability.
  • They integrate a range of techniques—from classical statistical methods to Bayesian and deep learning frameworks—to robustly estimate trends and handle regime shifts.
  • Empirical evaluations demonstrate that TATS models lower forecast errors and achieve tighter uncertainty quantification compared to traditional approaches in diverse applications.

A Trend-Adjusted Time Series (TATS) model is a broad class of statistical and machine-learning methodologies in which the nonstationary trend—or a more general slowly-varying component—of a time series is explicitly modeled, estimated, or incorporated, either as a latent process, a deterministic function, or an externally predicted signal. By decomposing the series into a trend component and one or more residual (often stationary or weakly stationary) components, TATS approaches aim to improve forecasting accuracy, interpretability, and the ability to characterize structural changes such as regime shifts or breaks. TATS models span the spectrum from classical statistical methods (state-space models, ARIMA/ARMA with deterministic trends) to modern Bayesian, machine learning, and deep learning frameworks, and encompass both continuous and count-valued time series.

1. Core Model Structure and Decomposition

Most TATS models share a basic decomposition, frequently instantiated as

Y_t = T_t + S_t + Z_t,

where T_t is the trend (possibly stochastic or piecewise), S_t a deterministic or stochastic seasonal component, and Z_t a residual or irregular process (serially uncorrelated or weakly stationary). The trend can be modeled in a variety of ways:

  • Deterministic polynomial or piecewise-linear trends: T_t modeled as a low-order polynomial, with breakpoints for regime changes (Abdikhadir, 9 Jun 2025, Gao et al., 2018).
  • Latent stochastic trends: State-space models with random walk or locally-adaptive priors on the trend increments (Hazra, 2015, Schafer et al., 2023).
  • Trends estimated jointly with the dependence structure: Deep learning models integrate trend extraction and residual modeling via LSTM or neural networks (Li et al., 2022).
  • Trend estimated/adjusted via external classifiers: Two-part frameworks where the trend direction is predicted and used to adjust forecaster output (Kazemdehbashi, 19 Jan 2026).

Seasonal and stationary effects may be modeled additively or multiplicatively, via trigonometric bases, moving averages, or frequency-domain transformations.
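As an illustration of the additive decomposition above, a minimal classical decomposition (centered moving-average trend, per-phase seasonal means) can be sketched in NumPy. The function name, edge handling, and defaults here are illustrative choices, not taken from any of the cited papers:

```python
import numpy as np

def additive_decompose(y, period):
    """Classical additive decomposition Y_t = T_t + S_t + Z_t.

    Trend via a centered moving average, seasonality via per-phase
    means of the detrended series, residual as the remainder.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Centered moving-average weights; even periods get half-weights
    # at both ends so the window stays symmetric.
    if period % 2 == 0:
        w = np.r_[0.5, np.ones(period - 1), 0.5] / period
    else:
        w = np.ones(period) / period
    trend = np.convolve(y, w, mode="same")
    half = len(w) // 2
    trend[:half] = np.nan          # edges lack a full window
    trend[n - half:] = np.nan
    detrended = y - trend
    # Seasonal component: average each phase, then center to sum to zero.
    seas_means = np.array([np.nanmean(detrended[i::period])
                           for i in range(period)])
    seas_means -= seas_means.mean()
    seasonal = np.tile(seas_means, n // period + 1)[:n]
    residual = y - trend - seasonal
    return trend, seasonal, residual
```

For a series that is exactly linear trend plus a period-12 sinusoid, the centered moving average recovers the trend exactly in the interior, and the residual vanishes there.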

2. Representative Methodologies

Several distinct TATS methodologies have emerged:

  • Bayesian trend filtering and shrinkage: Hierarchical Bayes models such as the Negative Binomial Bayesian Trend Filter (NB-BTF) for counts model the latent log-mean with adaptively regularized increments, capturing both local smoothness and abrupt breaks (Schafer et al., 2023). Dynamic global-local shrinkage (horseshoe priors) enable multiscale trend inference.
  • Decompose-and-learn neural forecasting: Recent architectures such as TDformer and ST-MTM exploit explicit trend-seasonal decomposition prior to forecasting. In TDformer, a bank of moving-average trend filters feeds a small MLP for trend extrapolation, while the seasonal remainder is modeled with Fourier-domain attention (Zhang et al., 2022). In ST-MTM, masking and encoding are performed separately for trend and seasonal components, with a gating network learning the optimal fusion for final prediction (Seo et al., 13 Jun 2025).
  • State-space and synthetic control with trend-awareness: The Time-Aware Synthetic Control (TASC) model utilizes a linear-Gaussian state-space model with a constant latent trend, fit by EM via Kalman smoothing, to produce trend-following counterfactuals and uncertainty quantification (Rho et al., 6 Jan 2026).
  • High-dimensional structural deconstruction: In multivariate/space-time settings, structural-factor models decompose each series into trend (typically polynomial), seasonality (trigonometric), and a factor-structured irregular component, allowing for scalable BIC-based order selection and factor extraction (Gao et al., 2018).
  • Time-varying parameter Bayesian state-space models: Fully Bayesian frameworks specify the trend, seasonal, and noise terms as evolving stochastic processes, with time-varying coefficients estimated via efficient one-dimensional Gibbs sampling, enabling robust inference and missing data imputation (Hazra, 2015).
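To make the state-space flavor of these methods concrete, the following is a minimal Kalman filter for a local linear trend model (latent level and slope). This is a generic linear-Gaussian sketch in the spirit of the state-space approaches above; the noise variances, diffuse initialization, and function name are illustrative assumptions, not the TASC or Hazra specification:

```python
import numpy as np

def filter_local_linear_trend(y, q_level=1e-2, q_slope=1e-4, r=1.0):
    """Kalman filter for a local linear trend model:
        level_{t+1} = level_t + slope_t + noise,
        slope_{t+1} = slope_t + noise,
        y_t         = level_t + observation noise.
    Returns the filtered level, i.e. the trend estimate T_t.
    """
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])               # observe the level only
    Q = np.diag([q_level, q_slope])          # state noise covariance
    x = np.zeros(2)                          # state: (level, slope)
    P = np.eye(2) * 1e6                      # diffuse initial covariance
    levels = np.empty(len(y))
    for t, obs in enumerate(y):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + r                  # innovation variance, (1, 1)
        K = (P @ H.T) / S                    # Kalman gain, (2, 2) -> (2, 1)
        x = x + (K * (obs - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        levels[t] = x[0]
    return levels
```

On a noiseless linear series the filtered level locks onto the true line after a short burn-in, since the two-dimensional state can represent it exactly.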

3. Trend Extraction, Adaptivity, and Break Handling

TATS frameworks differ in their adaptivity and capacity for regime change detection:

  • Piecewise-Linear and Structural Breaks: The LTSTA model optimizes the number and position of structural breaks in the trend via dynamic programming, ensuring continuity and re-estimation after removing seasonality and residuals (Abdikhadir, 9 Jun 2025). This is essential in economic applications (e.g., GDP under policy shocks).
  • Locally Adaptive Shrinkage: NB-BTF employs AR(1)-driven global-local shrinkage priors on trend differences, allowing most increments to be small (encouraging smoothness) but permitting sharp transient jumps, automatically identified via the posterior shrinkage proportion κ_t (Schafer et al., 2023).
  • Gating and Masking in Neural Models: Advanced neural TATS models (e.g., ST-MTM) use masking strategies and component-wise gating to focus learning on temporally localized trends, enhance robustness, and dynamically balance trend-seasonal influences (Seo et al., 13 Jun 2025).
  • Trend Information for Time Series Mining: TSAX represents trend at the symbolization level by augmenting the SAX encoding with per-segment trend directionality, enabling better discrimination of rising/falling motif structure in classification and motif search tasks (Fuad, 2021).
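A simplified, single-break analogue of the structural-break search can be sketched as an exhaustive scan over candidate breakpoints using a continuous hinge basis, so the two trend segments meet at the break. This omits the dynamic-programming machinery and multi-break handling of the actual LTSTA method; function and variable names are illustrative:

```python
import numpy as np

def fit_one_break(y):
    """Fit a continuous two-segment piecewise-linear trend by
    exhaustively scanning candidate breakpoints and keeping the
    least-squares fit with the smallest SSE.
    """
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    best = (np.inf, None, None)
    for b in range(2, len(y) - 2):          # candidate breakpoints
        # Basis: intercept, slope, hinge (t - b)_+ ; the hinge term
        # changes the slope after b while keeping the fit continuous.
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - b, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ beta) ** 2))
        if sse < best[0]:
            best = (sse, b, beta)
    _, b, beta = best
    return b, beta                          # beta = (intercept, slope, slope change)
```

For data with a genuine kink, the scan recovers the break location exactly, and the pre- and post-break slopes are `beta[1]` and `beta[1] + beta[2]`.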

4. Estimation, Inference, and Computation

Parameter estimation and uncertainty quantification in TATS models vary by modeling paradigm:

  • Classical Methods: Polynomial/trigonometric decompositions are fit by OLS, with order selection via BIC, followed by factor estimation by PCA or CCA; consistency is established under increasing T and p (Gao et al., 2018).
  • Bayesian Computation: Posterior inference in TATS models often uses MCMC; NB-BTF uses PĂ³lya-Gamma data augmentation and blocked Gibbs/slice sampling for shrinkage parameters (Schafer et al., 2023). Hazra’s state-space model achieves computational tractability via one-dimensional Gibbs updates without matrix inversion (Hazra, 2015).
  • Expectation-Maximization: TASC’s EM algorithm alternates between Kalman smoothing/sufficient statistic computation and closed-form parameter updates for linear-Gaussian models (Rho et al., 6 Jan 2026).
  • Deep Learning: Joint gradient-based optimization is used for neural TATS models, with auto-differentiation and block updates for trend/VAR/distributional parameters. Stability constraints are enforced via the Ansley-Kohn transformation for deep VAR models (Li et al., 2022).
  • Mask-based Pretraining: Neural TATS approaches utilize masked autoencoding or masked time-series modeling with trend/seasonal-aware masking regimes to improve pretraining efficacy (Seo et al., 13 Jun 2025).
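The BIC-based order selection for a polynomial trend can be sketched as follows. The rescaling of time to [-1, 1] and the small guard constant inside the logarithm are implementation choices for this illustration, not details from the cited work:

```python
import numpy as np

def select_trend_order(y, max_order=5):
    """Select a polynomial trend order by BIC over OLS fits.

    Time is rescaled to [-1, 1] for numerical conditioning; a small
    constant guards the log against exact (zero-RSS) fits.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    t = np.linspace(-1.0, 1.0, n)
    best_order, best_bic = 0, np.inf
    for d in range(max_order + 1):
        X = np.vander(t, d + 1, increasing=True)   # columns 1, t, ..., t^d
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        bic = n * np.log(rss / n + 1e-12) + (d + 1) * np.log(n)
        if bic < best_bic:
            best_order, best_bic = d, bic
    return best_order
```

On an exactly quadratic series, orders above two reduce the residual sum of squares no further, so the BIC penalty makes order two the minimizer.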

5. Empirical Evaluation, Advantages, and Limitations

Empirical results across domains consistently demonstrate the benefits of TATS approaches:

  • Forecasting Accuracy: In financial forecasting (e.g., gold price), TATS models that adjust base forecasts with trend-classifier outputs lower MSE and raise trend-detection accuracy compared to LSTM and Bi-LSTM baselines (Kazemdehbashi, 19 Jan 2026). In power outage count data, NB-BTF achieves tighter credible intervals and improved RMSE, especially for low-count regimes (Schafer et al., 2023). On benchmark datasets (M3, CIF 2016), TATS variants such as {L,S}GT and LTSTA outperform classical and contemporary baselines across MAE, RMSE, sMAPE, and MASE (Abdikhadir, 9 Jun 2025, Smyl et al., 2023).
  • Adaptivity and Interpretability: Piecewise-linear and locally adaptive shrinkage variants provide interpretable recovery of structural breaks, regime-specific growth rates, and localized anomalies.
  • Robustness to Missing Data and Noise: Bayesian TATS models with fully probabilistic state-spaces natively propagate uncertainty and impute missing values in a coherent fashion (Hazra, 2015).
  • Computational Cost: MCMC-based and deep learning-based TATS models are typically more expensive than classical OLS/ARMA methods, with per-iteration or per-epoch costs scaling with series length and/or dimension (Schafer et al., 2023, Li et al., 2022). Sufficient burn-in, thinning, and hyperparameter tuning are critical for convergence and effective uncertainty quantification.
  • Limitations: Model mis-specification (e.g., inappropriate trend order or inadequate masking strategy), sensitivity to breakpoint/compression parameters, and potential over-correction in hybrid frameworks are common challenges.
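For reference, the sMAPE and MASE metrics cited in these comparisons can be computed as follows, using the common M-competition conventions (percentage sMAPE with the sum of absolute values in the denominator; MASE scaled by the in-sample seasonal-naive MAE):

```python
import numpy as np

def smape(y, yhat):
    """Symmetric MAPE in percent (M-competition convention)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return 100.0 * np.mean(2.0 * np.abs(yhat - y) / (np.abs(y) + np.abs(yhat)))

def mase(y, yhat, y_train, m=1):
    """Mean absolute scaled error: out-of-sample MAE divided by the
    in-sample MAE of the seasonal-naive forecast with period m."""
    y, yhat, y_train = (np.asarray(a, float) for a in (y, yhat, y_train))
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return np.mean(np.abs(yhat - y)) / scale
```

A MASE below one means the forecaster beats the (seasonal) naive benchmark on average; sMAPE is bounded between 0 and 200 percent.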

6. Scope of Application and Future Directions

TATS methodology underpins a wide and growing set of applications:

  • Nonstationary financial and economic time series: Explicit trend adjustment is crucial for forecasting under regime changes driven by exogenous shocks, policy interventions, or market cycles (Abdikhadir, 9 Jun 2025, Kazemdehbashi, 19 Jan 2026, Rho et al., 6 Jan 2026).
  • Energy, ecology, and count data modeling: Locally adaptive Bayesian TATS enable principled handling of discrete, sparse, and highly non-stationary series (Schafer et al., 2023).
  • Time series mining and data mining tasks: Trend-aware symbolic representations (TSAX) improve classification, motif discovery, cluster analysis, and similarity search especially for datasets where first-order dynamics are class-informative (Fuad, 2021).
  • Benchmark forecasting challenges: Bayesian and neural TATS models currently set the state of the art on M3, CIF 2016, and other public datasets (Smyl et al., 2023, Abdikhadir, 9 Jun 2025, Zhang et al., 2022).

A significant ongoing research direction is the unification of trend-adjusted paradigms with scalable, interpretable neural architectures and the extension to high-dimensional, heterogeneous, and non-Gaussian settings. A plausible implication is that trend adjustment—whether via explicit decomposition, locally adaptive shrinkage, or classifier-driven correction—will remain a foundational modeling principle for nonstationary time series.
