
TSLANet: Lightweight Adaptive Time Series Network

Updated 18 January 2026
  • TSLANet is a lightweight adaptive network that leverages convolutional and spectral processing blocks to analyze multivariate time series data efficiently.
  • The Adaptive Spectral Block uses FFT-based transformation with learnable thresholds to denoise and extract both global and local features for robust feature representation.
  • Self-supervised pretraining with masked autoencoding mitigates overfitting, while the convolutional-spectral design keeps computational complexity well below that of Transformer models.

A Time Series Lightweight Adaptive Network (TSLANet) is a universal convolutional framework for multivariate time series analysis designed to efficiently capture both long- and short-range dependencies, while providing resilience to noise and achieving strong performance across classification, forecasting, and anomaly detection tasks. Directly addressing the inefficiencies and overfitting tendencies of Transformer-based models, TSLANet leverages spectral and convolutional processing blocks in tandem with self-supervised learning to create a scalable, robust, and lightweight alternative for time series representation learning (Eldele et al., 2024).

1. Architectural Overview

TSLANet processes an input multivariate time series $S \in \mathbb{R}^{C \times L}$ in three main stages: patch embedding, a series of $N$ stacked TSLANet layers, and a task-specific linear head. Each TSLANet layer sequentially applies two principal modules—the Adaptive Spectral Block (ASB) and the Interactive Convolution Block (ICB)—forming the core of its processing pipeline:

  • Patch Embedding: The input is segmented into $M$ patches, each embedded and summed with a learnable positional encoding.
  • Layer Sequence: Each layer receives the output of its predecessor, structured as:

$$\text{Input} \rightarrow \text{ASB} \rightarrow \text{ICB} \rightarrow \text{Output}$$

  • Head: After $N$ layers, a linear head produces class logits, regression outputs (for forecasting), or anomaly score reconstructions.
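As a concrete toy illustration, the stage composition can be sketched in NumPy with stand-in ASB/ICB placeholders; the dimensions and helper names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def patch_embed(x, patch_len, W_e, pos):
    """Split a (C, L) series into non-overlapping patches and embed each (toy version)."""
    C, L = x.shape
    M = L // patch_len
    patches = x[:, :M * patch_len].reshape(C, M, patch_len)   # (C, M, p)
    return patches @ W_e + pos                                 # (C, M, D) + positional encoding

def asb(z):
    # stand-in spectral block: FFT along the patch axis, then straight back (identity filter)
    return np.fft.irfft(np.fft.rfft(z, axis=1), n=z.shape[1], axis=1)

def icb(z):
    # stand-in convolution block (identity placeholder)
    return z

C, L, p, D = 3, 32, 8, 16                  # toy sizes
x = rng.standard_normal((C, L))
W_e = 0.1 * rng.standard_normal((p, D))    # patch embedding weights
pos = 0.01 * rng.standard_normal((L // p, D))

z = patch_embed(x, p, W_e, pos)
for _ in range(2):                         # N = 2 stacked layers: Input -> ASB -> ICB -> Output
    z = icb(asb(z))
print(z.shape)
```

Real ASB and ICB modules replace the identity placeholders; the point here is only the patch-then-stack dataflow.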

2. Adaptive Spectral Block (ASB)

The Adaptive Spectral Block constitutes the spectral processing unit of TSLANet, targeting both denoising and efficient feature extraction:

  • Fourier Transform: Embedded patches $x[n] \in \mathbb{R}^{C \times p'}$ are transformed to the frequency domain via FFT:

$$X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j 2\pi k n / N}$$

  • Adaptive Thresholding: Compute the power spectrum $P[k] = |X[k]|^2$ and apply a binary mask $M[k] = \mathbf{1}_{\{P[k] > \theta\}}$, where $\theta$ is a learnable (potentially channelwise) threshold optimized by backpropagation. Frequencies below this power are zeroed: $X_\text{filt}[k] = X[k] \odot M[k]$.
  • Global/Local Spectral Filtering: Two learnable filters operate concurrently:

$$X_G[k] = W_G \odot X[k], \qquad X_L[k] = W_L \odot X_\text{filt}[k]$$

where $W_G, W_L \in \mathbb{C}^{C \times N}$. Their sum $X_\mathrm{int}[k] = X_G[k] + X_L[k]$ aggregates global periodic and local denoised patterns.

  • Inverse FFT: Return to the time domain via IFFT, producing denoised and adaptively filtered representations:

$$x'[n] = \mathrm{IFFT}(X_\mathrm{int}[k])$$

The combination of learnable spectral masking and filtering distinguishes ASB, enhancing both global context capture and noise robustness.
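The four steps above can be sketched in NumPy as a minimal real-FFT version; the filter shapes, threshold value, and function names are illustrative assumptions, not the released code:

```python
import numpy as np

def adaptive_spectral_block(x, W_G, W_L, theta):
    """Sketch of the ASB over the last (time/token) axis of x, shape (C, N).

    W_G, W_L: filters of shape (C, N//2 + 1) in the rFFT domain (complex in the
    paper; real-valued here for simplicity). theta: learnable power threshold.
    """
    X = np.fft.rfft(x, axis=-1)                   # Fourier transform
    P = np.abs(X) ** 2                            # power spectrum
    mask = (P > theta).astype(x.dtype)            # adaptive binary mask
    X_filt = X * mask                             # zero out low-power frequencies
    X_int = W_G * X + W_L * X_filt                # global + local spectral filtering
    return np.fft.irfft(X_int, n=x.shape[-1], axis=-1)  # inverse FFT

C, N = 2, 64
rng = np.random.default_rng(1)
t = np.linspace(0, 8 * np.pi, N, endpoint=False)
x = np.sin(t) + 0.1 * rng.standard_normal((C, N))   # sinusoid + noise
F = N // 2 + 1
y = adaptive_spectral_block(x, np.ones((C, F)), np.ones((C, F)), theta=1.0)
print(y.shape)
```

With identity filters, the block reduces to soft denoising: low-power (noise-dominated) frequency bins are suppressed in the local branch while the global branch passes the full spectrum.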

3. Interactive Convolution Block (ICB)

Following ASB, the Interactive Convolution Block captures multi-scale temporal interactions via parallel convolutional pathways:

  • Parallel Convolutions: Two 1D convolutions—Conv1 (kernel size $k_1$) and Conv2 (kernel size $k_2$)—extract fine and coarse features, respectively. Outputs are modulated cross-scale:

$$A_1 = \mathrm{GELU}(\mathrm{Conv1}(x')) \odot \mathrm{Conv2}(x'), \qquad A_2 = \mathrm{GELU}(\mathrm{Conv2}(x')) \odot \mathrm{Conv1}(x')$$

  • Aggregation and Output: Summed activations are passed through a pointwise Conv3 for final block output:

$$O_\text{ICB} = \mathrm{Conv3}(A_1 + A_2)$$

By promoting rich feature interactions across temporal scales, ICB supports robust pattern recognition in diverse time series contexts.
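A minimal NumPy sketch of this gating scheme, using depthwise 'same'-padded convolutions as stand-ins for Conv1/Conv2/Conv3 (kernel sizes, weights, and helper names are illustrative assumptions):

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def depthwise_conv1d(x, w):
    """'Same'-padded depthwise 1D convolution: x is (C, L), w is (C, k) with odd k."""
    k = w.shape[-1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    return np.stack([(xp[:, i:i + k] * w).sum(axis=-1) for i in range(x.shape[-1])],
                    axis=-1)

def interactive_conv_block(x, w1, w2, w3):
    """Sketch of the ICB: cross-scale gating of two parallel convolutions."""
    a1 = gelu(depthwise_conv1d(x, w1)) * depthwise_conv1d(x, w2)  # fine gates coarse
    a2 = gelu(depthwise_conv1d(x, w2)) * depthwise_conv1d(x, w1)  # coarse gates fine
    return depthwise_conv1d(a1 + a2, w3)                          # pointwise Conv3

rng = np.random.default_rng(2)
C, L = 4, 32
x = rng.standard_normal((C, L))
w1 = 0.1 * rng.standard_normal((C, 3))   # small kernel: fine features
w2 = 0.1 * rng.standard_normal((C, 7))   # larger kernel: coarse features
w3 = rng.standard_normal((C, 1))         # kernel size 1: pointwise aggregation
out = interactive_conv_block(x, w1, w2, w3)
print(out.shape)
```

The symmetric $A_1$/$A_2$ terms mean each scale both produces features and gates the other scale's features, which is the "interactive" part of the block.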

4. Self-Supervised Pretraining and Training Objectives

TSLANet employs dataset-specific self-supervised learning to enhance feature quality:

  • Masked Autoencoding: Random subsets of patches are masked, and the network reconstructs their raw signals. The mean-squared reconstruction loss,

$$\mathcal{L}_\mathrm{MSE} = \frac{1}{|\mathcal{M}|} \sum_{i \in \mathcal{M}} \left\| x_i^\mathrm{pred} - x_i^\mathrm{true} \right\|^2,$$

compels the model to attend to both global and local dependencies.

  • Fine-tuning Losses: For classification, label-smoothed cross-entropy is used:

$$\mathcal{L}_\mathrm{CE} = -\sum_{c=1}^{C} \left[(1-\epsilon)\, y_c + \frac{\epsilon}{C}\right] \log \hat{y}_c$$

This dual-stage training harnesses unlabeled data and stabilizes learning, particularly beneficial for small-data regimes.
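Both objectives follow directly from the formulas in this section; the NumPy sketch below mirrors them (function names are hypothetical):

```python
import numpy as np

def masked_mse(pred, true, mask):
    """Masked-autoencoding loss: squared patch errors averaged over masked patches.

    pred, true: (M, p) reconstructed / target patches; mask: boolean (M,) marking
    which patches were masked during pretraining.
    """
    diff = pred[mask] - true[mask]
    return float((diff ** 2).sum() / mask.sum())

def label_smoothed_ce(logits, y, eps=0.1):
    """Label-smoothed cross-entropy for a single example; y is the true class index."""
    z = logits - logits.max()                 # numerically stable log-softmax
    log_probs = z - np.log(np.exp(z).sum())
    C = logits.shape[0]
    target = np.full(C, eps / C)              # epsilon/C on every class ...
    target[y] += 1.0 - eps                    # ... plus (1 - eps) on the true class
    return float(-(target * log_probs).sum())

# perfect reconstruction -> zero loss on the masked patches
x = np.arange(12.0).reshape(4, 3)
m = np.array([True, False, True, False])
print(masked_mse(x, x, m))                    # 0.0

# eps = 0 recovers ordinary cross-entropy: -log softmax at the true class
print(label_smoothed_ce(np.array([0.0, 0.0]), 0, eps=0.0))
```

Setting `eps=0` shows the smoothing term's role: it redistributes a small amount of target mass away from the true class, which discourages overconfident logits.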

5. Empirical Performance and Robustness

Extensive benchmarking demonstrates TSLANet's effectiveness across canonical time series tasks:

| Task | Benchmark Datasets | Metric | TSLANet Performance | Notable Comparison |
|------|--------------------|--------|---------------------|--------------------|
| Classification | UCR, UEA, Biomedical, HAR | Accuracy (%) | UCR: 83.18; UEA: 72.73; Bio: 90.24; HAR: 97.46 | Outperforms ROCKET, TS-TCC; 2%+ over best baseline |
| Forecasting | ECL, ETTh1/2, ETTm1/2, Exchange, Traffic, Weather | MSE, MAE | 2nd-lowest MSE in 7/8 tasks; 3% MSE reduction (ETT), 3.8% (Weather) | Beats PatchTST (select tasks) |
| Anomaly Detection | SMD, MSL, SMAP, SWaT, PSM | F1-score (%) | 87.54 (avg, best); +0.82% over GPT4TS | Highest F1; resilient to noise |

TSLANet maintains accuracy within 5% of clean performance under Gaussian noise perturbations, outperforming Transformer-based models in robustness. On small datasets, such as uWaveGestureLibraryAll, it retains over 90% accuracy with just 20% of the training data, where comparison models suffer significant degradation.

6. Complexity, Scalability, and Ablation

TSLANet achieves $\mathcal{O}(L \log L)$ complexity in the spectral processing step, compared to the $\mathcal{O}(L^2)$ of Transformer self-attention. On UEA Heartbeat, it requires 93% fewer FLOPs and 84% fewer parameters than PatchTST, while achieving 77.56% accuracy (versus PatchTST's 69.76%).
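For intuition, the gap between the two asymptotic costs can be tabulated by ignoring constant factors (purely illustrative arithmetic, not measured runtimes):

```python
import math

# the attention/FFT operation-count ratio grows roughly as L / log2(L)
for L in (128, 512, 2048, 8192):
    fft_ops = L * math.log2(L)     # O(L log L) spectral step
    attn_ops = L ** 2              # O(L^2) pairwise self-attention
    print(f"L={L:5d}  attention/FFT cost ratio ~ {attn_ops / fft_ops:6.1f}")
```

The ratio widens with sequence length, which is why the spectral formulation pays off most on long inputs.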

Ablation studies confirm component necessity:

| Component Removed | FordA Accuracy (full model: 93.1%) | ETTh1 MSE (full model: 0.413) |
|-------------------|------------------------------------|-------------------------------|
| ASB | ↓ to 87.3% | ↑ to 0.421 |
| ASB-Local | ↓ to 92.7% | ↑ to 0.417 |
| ICB | ↓ to 91.3% | ↑ to 0.419 |
| Pretraining | ↓ to 92.5% | ↑ to 0.415 |

The ASB's adaptive denoising and global context capture are decisive for accuracy and robustness.

7. Implementation Specifications

Key implementation parameters and training protocols:

  • Optimizer: AdamW
    • Classification: learning rate 1e-3, weight decay 1e-4, pretrain 50 epochs, fine-tune 100 epochs.
    • Forecasting/Anomaly: learning rate 1e-4, weight decay 1e-6, pretrain 10 epochs, fine-tune 20 epochs.
  • Batching: Overlap stride set to half the patch size.
  • Hardware: Model trained on NVIDIA RTX A6000.
  • Code: Publicly released at https://github.com/emadeldeen24/TSLANet.
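The half-patch overlap in the batching scheme can be sketched as follows (a hypothetical helper, not the released code):

```python
import numpy as np

def extract_patches(x, patch_len, stride):
    """Overlapping patches along the last axis; stride = patch_len // 2 per the setup above."""
    L = x.shape[-1]
    starts = range(0, L - patch_len + 1, stride)
    return np.stack([x[..., s:s + patch_len] for s in starts], axis=-2)

x = np.arange(32.0)                                  # toy univariate series
P = extract_patches(x, patch_len=8, stride=8 // 2)   # half-patch overlap
print(P.shape)                                        # 7 overlapping patches of length 8
```

Each patch shares its second half with the next patch's first half, so local context is never cut at a hard patch boundary.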

TSLANet demonstrates a practical balance of accuracy, robustness, and efficiency by combining FFT-based adaptive spectral filtering, interactive convolutions, and masked autoencoder pretraining. This combination enables TSLANet to surpass state-of-the-art Transformer and MLP models across diverse time series tasks, validated through comprehensive empirical studies (Eldele et al., 2024).
