Revisited Large Language Model for Time Series Analysis through Modality Alignment (2410.12326v1)

Published 16 Oct 2024 in cs.LG

Abstract: LLMs have demonstrated impressive performance in many pivotal web applications such as sensor data analysis. However, since LLMs are not designed for time series tasks, simpler models like linear regressions can often achieve comparable performance with far less complexity. In this study, we perform extensive experiments to assess the effectiveness of applying LLMs to key time series tasks, including forecasting, classification, imputation, and anomaly detection. We compare the performance of LLMs against simpler baseline models, such as single-layer linear models and randomly initialized LLMs. Our results reveal that LLMs offer minimal advantages for these core time series tasks and may even distort the temporal structure of the data. In contrast, simpler models consistently outperform LLMs while requiring far fewer parameters. Furthermore, we analyze existing reprogramming techniques and show, through data manifold analysis, that these methods fail to effectively align time series data with language and display pseudo-alignment behaviour in embedding space. Our findings suggest that the performance of LLM-based methods in time series tasks arises from the intrinsic characteristics and structure of time series data, rather than any meaningful alignment with the LLM architecture.
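
To make the comparison concrete, below is a minimal sketch of the kind of single-layer linear forecasting baseline the abstract describes. This is not the authors' code: it assumes PyTorch, and the class name, shapes, and hyperparameters (a 336-step lookback, 96-step horizon) are illustrative.

```python
# Minimal sketch of a single-layer linear forecasting baseline
# (illustrative only; not the paper's implementation).
import torch
import torch.nn as nn

class LinearForecaster(nn.Module):
    """Maps a lookback window to the forecast horizon with one
    linear layer, applied independently to each channel."""
    def __init__(self, lookback: int, horizon: int):
        super().__init__()
        self.proj = nn.Linear(lookback, horizon)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, lookback, channels) -> (batch, horizon, channels)
        return self.proj(x.transpose(1, 2)).transpose(1, 2)

# Usage: forecast 96 future steps from a 336-step window
# of a 7-variate series (e.g., sensor readings).
model = LinearForecaster(lookback=336, horizon=96)
x = torch.randn(8, 336, 7)
y_hat = model(x)  # shape: (8, 96, 7)
print(sum(p.numel() for p in model.parameters()))  # ~32k parameters
```

A baseline of this shape has on the order of tens of thousands of parameters, versus billions for an LLM backbone, which is the parameter-count gap the abstract's "far fewer parameters" claim refers to.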

Authors (7)
  1. Liangwei Nathan Zheng (5 papers)
  2. Chang George Dong (4 papers)
  3. Wei Emma Zhang (46 papers)
  4. Lin Yue (7 papers)
  5. Miao Xu (43 papers)
  6. Olaf Maennel (11 papers)
  7. Weitong Chen (27 papers)
Citations (1)
