
SDE: A Simplified and Disentangled Dependency Encoding Framework for State Space Models in Time Series Forecasting (2408.12068v3)

Published 22 Aug 2024 in cs.LG

Abstract: In recent years, advancements in deep learning have spurred the development of numerous models for Long-term Time Series Forecasting (LTSF). However, most existing approaches struggle to fully capture the complex and structured dependencies inherent in time series data. In this work, we identify and formally define three critical dependencies that are fundamental to forecasting accuracy: order dependency and semantic dependency along the temporal dimension, as well as cross-variate dependency across the feature dimension. These dependencies are often treated in isolation, and improper handling can introduce noise and degrade forecasting performance. To bridge this gap, we investigate the potential of State Space Models (SSMs) for LTSF and emphasize their inherent advantages in capturing these essential dependencies. Additionally, we empirically observe that excessive nonlinearity in conventional SSMs introduces redundancy when applied to semantically sparse time series data. Motivated by this insight, we propose SDE (Simplified and Disentangled Dependency Encoding), a novel framework designed to enhance the capability of SSMs for LTSF. Specifically, we first eliminate unnecessary nonlinearities in vanilla SSMs, thereby improving their suitability for time series forecasting. Building on this foundation, we introduce a disentangled encoding strategy, which empowers SSMs to efficiently model cross-variate dependencies while mitigating interference between the temporal and feature dimensions. Furthermore, we provide rigorous theoretical justifications to substantiate our design choices. Extensive experiments on nine real-world benchmark datasets demonstrate that SDE-enhanced SSMs consistently outperform state-of-the-art time series forecasting models. Our code is available at https://github.com/YukinoAsuna/SAMBA.
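
To make the two design ideas in the abstract concrete, here is a minimal PyTorch sketch, written only from the abstract's description and not from the SAMBA repository: a purely linear (activation-free) state space recurrence along the temporal dimension, plus a disentangled encoder that handles temporal and cross-variate dependencies in separate branches before merging them. All module and parameter names (LinearSSM, DisentangledEncoder, d_state, etc.) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two ideas described in the abstract:
# (1) a linear (activation-free) SSM recurrence along the temporal dimension, and
# (2) a disentangled encoder whose temporal and cross-variate branches are
#     computed separately and then combined.
# Names and shapes are illustrative assumptions, not taken from the SAMBA repo.
import torch
import torch.nn as nn


class LinearSSM(nn.Module):
    """Discrete linear recurrence x_t = A x_{t-1} + B u_t, y_t = C x_t,
    applied independently to each variate, with no nonlinear activations."""

    def __init__(self, d_state: int):
        super().__init__()
        self.A = nn.Parameter(torch.randn(d_state, d_state) * 0.01)
        self.B = nn.Parameter(torch.randn(d_state, 1) * 0.01)
        self.C = nn.Parameter(torch.randn(1, d_state) * 0.01)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, length, variates) -> output of the same shape
        b, t, v = u.shape
        x = u.new_zeros(b, v, self.A.shape[0])  # hidden state per variate
        outputs = []
        for step in range(t):
            # The recurrence stays purely linear, matching the "simplified" design.
            x = x @ self.A.T + u[:, step, :].unsqueeze(-1) @ self.B.T
            outputs.append((x @ self.C.T).squeeze(-1))  # (batch, variates)
        return torch.stack(outputs, dim=1)


class DisentangledEncoder(nn.Module):
    """Temporal branch (per-variate linear SSM) and cross-variate branch
    (linear mixing across features) are computed in parallel and summed,
    so the two kinds of dependencies do not interfere inside one projection."""

    def __init__(self, n_variates: int, d_state: int = 16):
        super().__init__()
        self.temporal = LinearSSM(d_state)
        self.cross_variate = nn.Linear(n_variates, n_variates)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, length, variates)
        return self.temporal(u) + self.cross_variate(u)


if __name__ == "__main__":
    series = torch.randn(8, 96, 7)       # e.g. 7 variates, 96 time steps
    enc = DisentangledEncoder(n_variates=7)
    print(enc(series).shape)              # torch.Size([8, 96, 7])
```

The sketch only illustrates the separation of concerns the abstract argues for; the paper's actual architecture, parameterization, and theoretical justifications are in the linked repository and the full text.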

Citations (2)
