Patch-Based Dual-Branch CTNet
- The paper introduces a dual-branch network that decouples intra-channel temporal evolution and inter-variable correlations, achieving state-of-the-art forecasting accuracy.
- It employs patch embedding, global inter-patch attention, and an adaptive frequency-domain correction to robustly handle non-stationary industrial data.
- Ablation studies confirm that removing any core component, particularly the dual-branch structure, significantly degrades performance on various benchmarks.
The Patch-Based Dual-Branch Channel-Temporal Forecasting Network (D-CTNet) is a neural architecture designed for accurate multivariate time series (MTS) forecasting, with specific applicability to collaborative industrial systems facing inter-variable complexity and non-stationary distribution shifts. D-CTNet introduces a modular, patch-based dual-branch approach that jointly decouples and learns intra-channel temporal evolution and inter-variable correlations, enhanced by a global attention fusion mechanism and an adaptive frequency-domain stationarity correction to suppress environmental distributional drift. The architecture attains state-of-the-art performance across seven standard benchmarks, achieving superior forecasting accuracy and robustness relative to contemporary baselines (Wang et al., 30 Nov 2025).
1. Architecture and Data Flow
D-CTNet ingests input tensors $X \in \mathbb{R}^{B \times L \times C}$, where $B$ is the batch size, $L$ is the historical sequence length, and $C$ is the channel/variable dimension. Pre-normalization applies RevIN (Kim et al., 2021), which instance-normalizes each channel and stores the statistics for later denormalization.
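As a minimal sketch of the RevIN-style normalize/denormalize pair (NumPy; the learnable affine parameters of RevIN are omitted here for brevity):

```python
import numpy as np

def revin_norm(x, eps=1e-5):
    """Instance-normalize each series per channel (RevIN-style, no affine).
    x: (B, L, C). Returns normalized x plus the stats needed for denorm."""
    mu = x.mean(axis=1, keepdims=True)                   # per-series, per-channel mean
    sigma = np.sqrt(x.var(axis=1, keepdims=True) + eps)  # per-series, per-channel std
    return (x - mu) / sigma, mu, sigma

def revin_denorm(y, mu, sigma):
    """Restore the stored statistics on the forecast (the 'denorm' step)."""
    return y * sigma + mu

x = np.random.randn(2, 96, 7) * 3.0 + 5.0   # toy batch: B=2, L=96, C=7
x_norm, mu, sigma = revin_norm(x)
```

Storing `mu` and `sigma` at the input and re-applying them at the output is what lets the network operate on a (locally) stationarized signal.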
Patch Embedding
Patches of length $P$ are extracted with stride $S$, yielding $N = \lfloor (L - P)/S \rfloor + 1$ discrete patches.
- The input is reshaped so that each channel is processed independently, mapped by a 1D convolution into latent dimension $d$, and augmented with learnable positional embeddings $E_{\mathrm{pos}}$, producing patch tokens $Z \in \mathbb{R}^{B \times C \times N \times d}$.
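A schematic of the patching and embedding step (NumPy; the patch length $P=16$, stride $S=8$, latent size $d=64$, and the per-patch linear map standing in for the 1D convolution are illustrative choices, not the paper's settings):

```python
import numpy as np

def patchify(x, P=16, S=8):
    """Split each channel's series of length L into N overlapping patches.

    x: (B, L, C) input; returns (B, C, N, P) with N = (L - P)//S + 1.
    """
    B, L, C = x.shape
    N = (L - P) // S + 1
    out = np.empty((B, C, N, P))
    for n in range(N):
        # slice one window, move channels before the patch axis
        out[:, :, n, :] = x[:, n * S:n * S + P, :].transpose(0, 2, 1)
    return out

x = np.random.randn(2, 96, 7)                    # B=2, L=96, C=7
patches = patchify(x)                            # (2, 7, 11, 16)

d = 64
W = np.random.randn(16, d) / np.sqrt(16)         # per-patch linear embedding (conv stand-in)
pos = np.zeros((patches.shape[2], d))            # learnable positional embeddings (zero-init here)
tokens = patches @ W + pos                       # (B, C, N, d) patch tokens
```

With $L=96$, $P=16$, $S=8$ this gives $N = (96-16)/8 + 1 = 11$ patches per channel.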
Dual-Branch Channel–Temporal Module
The central innovation is a dual-branch module whose two branches both operate on the patch tokens $Z \in \mathbb{R}^{B \times C \times N \times d}$.
- Linear Temporal Branch (over the $N$ patches of each channel and latent dimension): a learnable weight $W_T \in \mathbb{R}^{N \times N}$ mixes patches along the temporal (patch) axis, followed by a GELU activation and a residual connection with LayerNorm.
- Channel Attention Branch (multi-head attention over the $C$ channels): channels are treated as the "sequence" dimension; queries, keys, and values are obtained via learned projections, and standard multi-head softmax attention is applied, followed by Dropout and LayerNorm.
The two branches are fused additively.
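Under the simplifying assumptions of single-head attention and no dropout (the paper uses multi-head attention with Dropout), the dual-branch module and its additive fusion can be sketched in NumPy as:

```python
import numpy as np

def layer_norm(z, eps=1e-5):
    mu = z.mean(-1, keepdims=True)
    var = z.var(-1, keepdims=True)
    return (z - mu) / np.sqrt(var + eps)

def gelu(z):
    return 0.5 * z * (1 + np.tanh(np.sqrt(2 / np.pi) * (z + 0.044715 * z ** 3)))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dual_branch(Z, W_T, Wq, Wk, Wv):
    """Z: (B, C, N, d) patch tokens. Temporal branch mixes patches per channel;
    channel branch attends across channels; outputs fuse additively."""
    # Temporal branch: learnable patch-mixing weight W_T (N, N), residual + LayerNorm
    mixed = np.einsum('mn,bcnd->bcmd', W_T, Z)
    Z_t = layer_norm(Z + gelu(mixed))
    # Channel branch: channels become the attention "sequence" axis
    Zc = Z.transpose(0, 2, 1, 3)                          # (B, N, C, d)
    Q, K, V = Zc @ Wq, Zc @ Wk, Zc @ Wv
    A = softmax(Q @ K.transpose(0, 1, 3, 2) / np.sqrt(Z.shape[-1]))
    Z_c = layer_norm(Zc + A @ V).transpose(0, 2, 1, 3)    # back to (B, C, N, d)
    return Z_t + Z_c                                      # additive fusion

B, C, N, d = 2, 7, 11, 16
Z = np.random.randn(B, C, N, d)
W_T = np.random.randn(N, N) / np.sqrt(N)
Wq, Wk, Wv = (np.random.randn(d, d) / np.sqrt(d) for _ in range(3))
out = dual_branch(Z, W_T, Wq, Wk, Wv)
```

Note how the two branches read the same tensor along different axes: $W_T$ acts on the patch axis $N$, while the attention acts on the channel axis $C$.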
Global Inter-Patch Attention Fusion
Applying multi-head attention along the patch dimension extends the effective receptive field, supporting long-range dependencies.
Forecast Head and Output Construction
- After the adaptive frequency-domain correction (§2), features are flattened over the patch and latent dimensions and projected linearly to the forecast horizon $H$.
- Reshaping and inverse normalization (the RevIN "denorm" step) yield the forecast $\hat{Y} \in \mathbb{R}^{B \times H \times C}$.
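The flatten-and-project head can be sketched as follows (the shapes $N=11$, $d=64$, $H=96$ are illustrative, not the paper's settings):

```python
import numpy as np

B, C, N, d, H = 2, 7, 11, 64, 96
Z = np.random.randn(B, C, N, d)                       # corrected features
W_head = np.random.randn(N * d, H) / np.sqrt(N * d)   # linear forecast head

flat = Z.reshape(B, C, N * d)                         # flatten patch/latent dims
Y_hat = (flat @ W_head).transpose(0, 2, 1)            # (B, H, C) forecast
print(Y_hat.shape)                                    # (2, 96, 7)
```

A separate RevIN denormalization (using the statistics stored at the input) would then restore each channel's original scale and offset.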
2. Frequency-Domain Stationarity Correction
To address non-stationarity, which is crucial for industrial or environmental data under shifting regimes, the penultimate feature representations undergo frequency-domain alignment. This mechanism computes FFTs of both the features and the original patch inputs, calculates their power spectra, aligns their autocorrelation structure, and scales the features by an adaptive factor.
No auxiliary frequency-alignment loss is used; correction is in-line.
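The paper's exact adaptive factor is not reproduced here; as a hypothetical sketch, one plausible in-line correction rescales features so their mean spectral power matches that of the original patch inputs:

```python
import numpy as np

def freq_stationarity_correction(feat, ref, eps=1e-8):
    """Hypothetical sketch of an in-line frequency-domain correction.

    feat, ref: arrays whose last axis is the patch/sequence dimension.
    The features are scaled so their mean FFT power matches the reference;
    the actual adaptive factor in the paper may differ.
    """
    Pf = np.abs(np.fft.rfft(feat, axis=-1)) ** 2   # feature power spectrum
    Pr = np.abs(np.fft.rfft(ref, axis=-1)) ** 2    # reference power spectrum
    scale = np.sqrt((Pr.mean(-1, keepdims=True) + eps) /
                    (Pf.mean(-1, keepdims=True) + eps))
    return feat * scale                            # purely in-line: no extra loss term

feat = np.random.randn(2, 7, 11)
ref = 3.0 * np.random.randn(2, 7, 11)
corrected = freq_stationarity_correction(feat, ref)
```

Because the correction is a deterministic rescaling applied in the forward pass, it needs no auxiliary alignment loss, matching the design choice stated above.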
3. Mathematical Specification
All core operations are given with precise tensor semantics to support reimplementation.
- Temporal Branch: $Z_{\mathrm{temp}} = \mathrm{LayerNorm}\big(Z + \mathrm{GELU}(W_T Z)\big)$, where $W_T \in \mathbb{R}^{N \times N}$ acts along the patch axis of $Z \in \mathbb{R}^{B \times C \times N \times d}$.
- Channel Attention Branch: $Z_{\mathrm{chan}} = \mathrm{LayerNorm}\big(Z + \mathrm{Dropout}(\mathrm{MHA}_C(Z))\big)$, with attention computed over the channel axis $C$.
- Global Inter-Patch Attention: $Z_{\mathrm{glob}} = \mathrm{LayerNorm}\big(Z' + \mathrm{MHA}_N(Z')\big)$, where $Z' = Z_{\mathrm{temp}} + Z_{\mathrm{chan}}$ and attention is computed over the patch axis $N$.
- Forecast Loss (MSE): $\mathcal{L} = \frac{1}{BHC}\sum_{b=1}^{B}\sum_{h=1}^{H}\sum_{c=1}^{C}\big(\hat{Y}_{b,h,c} - Y_{b,h,c}\big)^2$.
4. Optimization and Training Regime
Training employs the Adam optimizer with batch size 32 and no weight decay, for 30–50 epochs depending on the dataset. Only the direct forecasting error (MSE) is optimized; no auxiliary losses are used. Standard chronological train/validation/test splits are used for the MTS datasets.
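As an illustrative stand-in for this regime (Adam, MSE-only objective, batch size 32), the following hand-rolled Adam loop fits a toy linear forecaster in NumPy; the model, data, learning rate, and step count are placeholders, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 16))            # toy flattened input histories
W_true = rng.normal(size=(16, 4))
Y = X @ W_true                            # toy 4-step forecast targets

W = np.zeros((16, 4))                     # linear forecaster to be trained
m, v = np.zeros_like(W), np.zeros_like(W)
lr, b1, b2, eps = 1e-2, 0.9, 0.999, 1e-8  # Adam hyperparameters (illustrative lr)
for t in range(1, 2001):
    idx = rng.integers(0, len(X), size=32)        # batch size 32, as in the paper
    xb, yb = X[idx], Y[idx]
    grad = 2.0 * xb.T @ (xb @ W - yb) / len(xb)   # MSE gradient; no weight decay
    m = b1 * m + (1 - b1) * grad                  # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2             # second-moment estimate
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    W -= lr * m_hat / (np.sqrt(v_hat) + eps)      # bias-corrected Adam update

mse = float(np.mean((X @ W - Y) ** 2))
```

In practice the same loop structure applies with the full D-CTNet forward pass and an autodiff framework in place of the hand-computed gradient.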
5. Experimental Evaluation and Results
D-CTNet is validated on seven public MTS forecasting datasets: ETTm1, ETTm2, ETTh1, ETTh2, Exchange-Rate, Electricity, and Weather, using chronological train/val/test splits (6:2:2 or 7:1:2), with mean squared error (MSE) and mean absolute error (MAE) as evaluation metrics. Performance is averaged over the standard forecast horizons.
| Dataset | D-CTNet MSE | Best Baseline (MSE) |
|---|---|---|
| ETTm1 | 0.398 | PatchTST (0.402) |
| ETTm2 | 0.283 | RLinear (0.286) |
| ETTh1 | 0.359 | RLinear (0.446) |
| ETTh2 | 0.386 | PatchTST (0.684) |
| Electricity | 0.182 | MSGNet (0.194) |
| Weather | 0.247 | MSGNet (0.249) |
| Exchange | 0.347 | PatchTST (0.367) |
Full results, including per-horizon errors and MAE, are tabulated in the source.
6. Component Analysis and Ablative Insights
Comprehensive ablation studies demonstrate the impact of each architectural component:
- Removal of the dual-branch structure (DBCT) leads to the greatest performance drop (e.g., MSE increasing from 0.340 to 0.401 on ETTh1).
- Exclusion of the Global Patch Attention Fusion (GPAF) degrades performance (MSE to 0.386 on ETTh1).
- Omission of Frequency-Domain Stationarity Correction (FSC) results in moderate accuracy loss (MSE to 0.361 on ETTh1).
The dual-branch decoupling of temporal and channel-wise dependencies emerges as the most critical contributor to accuracy, with global inter-patch fusion supporting robust long-horizon extrapolation (notably at the longest horizons), and frequency-domain correction particularly benefiting non-stationary datasets such as Exchange-Rate.
7. Context and Significance
D-CTNet systematically addresses key challenges in multivariate forecasting for collaborative industrial applications: (1) decoupling intra-variable dynamics and inter-variable correlations, (2) extending long-range dependency modeling, and (3) improving model robustness to non-stationary environmental shifts. The architecture is directly comparable to Transformer-based and linear patch-wise models, incorporating innovations such as dual-branch parallelism and spectrum alignment. Its empirical superiority across diverse, standard MTS benchmarks, together with detailed architectural and training specifications, makes it a reproducible framework and a candidate for practical deployment in forecasting-driven digital twin and industrial monitoring scenarios (Wang et al., 30 Nov 2025).