Language Models Still Struggle to Zero-shot Reason about Time Series (2404.11757v1)
Abstract: Time series are critical for decision-making in fields like finance and healthcare. Their importance has driven a recent influx of works passing time series into LLMs, leading to non-trivial forecasting on some datasets. But it remains unknown whether non-trivial forecasting implies that LLMs can reason about time series. To address this gap, we introduce a first-of-its-kind evaluation framework for time series reasoning, including formal tasks and a corresponding dataset of multi-scale time series paired with text captions across ten domains. Using these data, we probe whether LLMs achieve three forms of reasoning: (1) Etiological Reasoning - given an input time series, can the LLM identify the scenario that most likely created it? (2) Question Answering - can an LLM answer factual questions about time series? (3) Context-Aided Forecasting - does highly relevant textual context improve an LLM's time series forecasts? We find that otherwise highly capable LLMs demonstrate surprisingly limited time series reasoning: they score marginally above random on etiological and question-answering tasks (up to 30 percentage points worse than humans) and show modest success in using context to improve forecasting. These weaknesses showcase that time series reasoning is an impactful yet deeply underdeveloped direction for LLM research. We make our datasets and code publicly available to support further research in this direction at https://github.com/behavioral-data/TSandLanguage
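To make the etiological-reasoning probe concrete, the sketch below shows one plausible way such a multiple-choice evaluation could be framed: the model is shown a serialized time series and asked which of several candidate scenarios most likely generated it. The helper names (`format_series`, `build_etiology_prompt`), the prompt wording, and the toy data are illustrative assumptions, not the authors' exact protocol, which is available in the linked repository.

```python
# Minimal sketch of an etiological-reasoning probe (assumed prompt format,
# not the paper's exact protocol; see the TSandLanguage repository).
import random


def format_series(values, precision=2):
    """Render a numeric time series as comma-separated text for the prompt."""
    return ", ".join(f"{v:.{precision}f}" for v in values)


def build_etiology_prompt(series, candidate_scenarios):
    """Multiple-choice prompt: which scenario most likely produced the series?"""
    options = "\n".join(
        f"({chr(ord('A') + i)}) {s}" for i, s in enumerate(candidate_scenarios)
    )
    return (
        "Here is a time series:\n"
        f"{format_series(series)}\n\n"
        "Which scenario most likely generated it?\n"
        f"{options}\n"
        "Answer with a single letter."
    )


# Example usage with toy data: a noisy upward trend plus three distractors.
series = [10 + 0.5 * t + random.gauss(0, 0.3) for t in range(24)]
scenarios = [
    "Hourly sales during a steadily growing product launch",
    "Daily high temperature over a midwinter cold snap",
    "Resting heart rate sampled once per minute during sleep",
    "White noise from a faulty sensor",
]
prompt = build_etiology_prompt(series, scenarios)
print(prompt)  # send `prompt` to an LLM and compare its letter to the gold label
```

Scoring a model this way reduces each probe to comparing a single predicted letter against a gold label, which is what allows accuracy to be measured against both a random-choice baseline and human performance, as reported in the abstract.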