How Can Large Language Models Understand Spatial-Temporal Data? (2401.14192v2)

Published 25 Jan 2024 in cs.LG and cs.CL

Abstract: While LLMs dominate tasks like natural language processing and computer vision, harnessing their power for spatial-temporal forecasting remains challenging. The disparity between sequential text and complex spatial-temporal data hinders this application. To address this issue, this paper introduces STG-LLM, an innovative approach empowering LLMs for spatial-temporal forecasting. We tackle the data mismatch by proposing: 1) STG-Tokenizer: This spatial-temporal graph tokenizer transforms intricate graph data into concise tokens capturing both spatial and temporal relationships; 2) STG-Adapter: This minimalistic adapter, consisting of linear encoding and decoding layers, bridges the gap between tokenized data and LLM comprehension. By fine-tuning only a small set of parameters, it can effectively grasp the semantics of tokens generated by STG-Tokenizer, while preserving the original natural language understanding capabilities of LLMs. Extensive experiments on diverse spatial-temporal benchmark datasets show that STG-LLM successfully unlocks LLM potential for spatial-temporal forecasting. Remarkably, our approach achieves competitive performance on par with dedicated SOTA methods.

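To make the adapter idea in the abstract concrete, below is a minimal sketch of how a linear encoding/decoding adapter around a frozen pretrained LLM could be wired up for spatial-temporal forecasting. It is not the authors' implementation: the class name, the per-node tokenization (one token per graph node built from its flattened history window), the GPT-2 backbone, and all shapes are illustrative assumptions.

```python
# Sketch of an STG-Adapter-style module: a linear encoder projects
# spatial-temporal tokens into the embedding space of a frozen LLM, and a
# linear decoder maps the LLM's hidden states to a forecast. Names, shapes,
# and the GPT-2 backbone are assumptions for illustration only.
import torch
import torch.nn as nn
from transformers import GPT2Model

class STGAdapterSketch(nn.Module):
    def __init__(self, history_len: int, horizon: int, d_model: int = 768):
        super().__init__()
        # One token per graph node: its history window is linearly encoded
        # into the LLM's embedding dimension.
        self.encoder = nn.Linear(history_len, d_model)
        self.llm = GPT2Model.from_pretrained("gpt2")
        for p in self.llm.parameters():          # keep the pretrained LLM frozen
            p.requires_grad = False
        self.decoder = nn.Linear(d_model, horizon)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, history_len) observations per node
        tokens = self.encoder(x)                                  # (batch, num_nodes, d_model)
        hidden = self.llm(inputs_embeds=tokens).last_hidden_state # frozen backbone
        return self.decoder(hidden)                               # (batch, num_nodes, horizon)

# Example: forecast 12 future steps for 207 sensors from 12 past steps.
model = STGAdapterSketch(history_len=12, horizon=12)
out = model(torch.randn(8, 207, 12))
print(out.shape)  # torch.Size([8, 207, 12])
```

Only the encoder and decoder parameters are trainable in this sketch, mirroring the paper's claim that fine-tuning a small set of parameters suffices while the LLM's pretrained weights stay intact.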
Authors (5)
  1. Lei Liu (332 papers)
  2. Shuo Yu (35 papers)
  3. Runze Wang (25 papers)
  4. Zhenxun Ma (1 paper)
  5. Yanming Shen (17 papers)
Citations (8)