STD-PLM: Understanding Both Spatial and Temporal Properties of Spatial-Temporal Data with PLM (2407.09096v3)

Published 12 Jul 2024 in cs.LG and cs.AI

Abstract: Spatial-temporal forecasting and imputation are important for real-world intelligent systems. Most existing methods are tailored to either forecasting or imputation and are not designed for both tasks. Additionally, they are less effective in zero-shot and few-shot learning. While pre-trained language models (PLMs) have exhibited strong pattern recognition and reasoning abilities across various tasks, including few-shot and zero-shot learning, their application to spatial-temporal data understanding has been constrained by insufficient modeling of the complex correlations within the data, such as temporal correlations, spatial connectivity, and non-pairwise, high-order spatial-temporal correlations. In this paper, we propose STD-PLM for understanding both spatial and temporal properties of Spatial-Temporal Data with a PLM, which is capable of performing both spatial-temporal forecasting and imputation tasks. STD-PLM understands spatial-temporal correlations via explicitly designed spatial and temporal tokenizers. Topology-aware node embeddings are designed so that the PLM can comprehend and exploit the topological structure of the data in an inductive manner. Furthermore, to mitigate the efficiency issues introduced by the PLM, we design a sandglass attention module (SGA) combined with a specific constrained loss function, which significantly improves the model's efficiency while ensuring performance. Extensive experiments demonstrate that STD-PLM exhibits competitive performance and generalization capabilities across forecasting and imputation tasks on various datasets. Moreover, STD-PLM achieves promising results on both few-shot and zero-shot tasks. The code is available at https://anonymous.4open.science/r/STD-PLM-F3BA.
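The efficiency idea the abstract attributes to the sandglass attention module is to shrink the token sequence the PLM must process and then restore per-node tokens afterwards. The PyTorch sketch below illustrates one plausible reading of that shape: cross-attention compresses many spatial-temporal tokens into a few learned latent tokens (the narrow waist handed to the PLM), and a second cross-attention expands them back. The class name `SandglassAttention`, the latent-token design, and all dimensions are assumptions for illustration, not the paper's exact architecture, and the paper's constrained loss is not modeled here.

```python
import torch
import torch.nn as nn


class SandglassAttention(nn.Module):
    """Hypothetical sketch of a sandglass-shaped attention bottleneck:
    compress N tokens down to K << N latent tokens before the PLM,
    then expand back to N tokens afterwards."""

    def __init__(self, d_model: int, num_latents: int, num_heads: int = 4):
        super().__init__()
        # Learned latent queries forming the narrow waist of the sandglass.
        self.latents = nn.Parameter(torch.randn(num_latents, d_model) * 0.02)
        self.compress = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.expand = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def squeeze(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, d) -> (B, K, d); latent queries attend over all tokens.
        q = self.latents.unsqueeze(0).expand(tokens.size(0), -1, -1)
        out, _ = self.compress(q, tokens, tokens)
        return out

    def restore(self, tokens: torch.Tensor, latents: torch.Tensor) -> torch.Tensor:
        # Original tokens query the (PLM-processed) latents: back to (B, N, d).
        out, _ = self.expand(tokens, latents, latents)
        return out


if __name__ == "__main__":
    sga = SandglassAttention(d_model=64, num_latents=8)
    x = torch.randn(2, 207, 64)   # e.g. 207 sensors as spatial tokens
    z = sga.squeeze(x)            # (2, 8, 64): only 8 tokens reach the PLM
    y = sga.restore(x, z)         # (2, 207, 64): per-node tokens recovered
    print(z.shape, y.shape)
```

Under this reading, the PLM's quadratic attention cost applies only to the K latent tokens rather than all N spatial-temporal tokens, which is where the efficiency gain would come from.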

Authors (8)
  1. Yiheng Huang (12 papers)
  2. Xiaowei Mao (8 papers)
  3. Shengnan Guo (20 papers)
  4. Yubin Chen (5 papers)
  5. Youfang Lin (52 papers)
  6. Huaiyu Wan (32 papers)
  7. Junfeng Shen (2 papers)
  8. Tiankuo Li (1 paper)