Multi-Patch Prediction: Adapting LLMs for Time Series Representation Learning (2402.04852v2)

Published 7 Feb 2024 in cs.LG

Abstract: In this study, we present aLLM4TS, an innovative framework that adapts LLMs for time-series representation learning. Central to our approach is reconceiving time-series forecasting as a self-supervised, multi-patch prediction task, which, compared to traditional contrastive learning or mask-and-reconstruction methods, captures temporal dynamics in patch representations more effectively. Our strategy comprises two training stages: (i) a causal continual pre-training phase on various time-series datasets, anchored on next-patch prediction, effectively syncing LLM capabilities with the intricacies of time-series data; (ii) fine-tuning for multi-patch prediction in the targeted time-series context. A distinctive element of our framework is the patch-wise decoding layer, which departs from previous methods reliant on sequence-level decoding. This design directly transposes individual patches into temporal sequences, thereby significantly bolstering the model's proficiency in learning temporal patch-based representations. aLLM4TS demonstrates superior performance in several downstream tasks, proving its effectiveness in deriving temporal representations with enhanced transferability and marking a pivotal advancement in the adaptation of LLMs for time-series analysis.
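To make the patch-wise decoding idea concrete, the sketch below contrasts it with conventional sequence-level decoding. This is a minimal illustration only: the class names, dimensions, and the use of a shared linear head per patch are assumptions for exposition, not the authors' released implementation.

```python
# Illustrative sketch of patch-wise vs. sequence-level decoding for
# patch-based time-series models. All names and shapes are hypothetical.
import torch
import torch.nn as nn


class PatchWiseDecoder(nn.Module):
    """Decodes each patch embedding independently back to its time steps."""

    def __init__(self, d_model: int, patch_len: int):
        super().__init__()
        # One shared linear head applied per patch; no mixing across patches.
        self.proj = nn.Linear(d_model, patch_len)

    def forward(self, patch_repr: torch.Tensor) -> torch.Tensor:
        # patch_repr: (batch, num_patches, d_model)
        out = self.proj(patch_repr)          # (batch, num_patches, patch_len)
        return out.flatten(start_dim=1)      # (batch, num_patches * patch_len)


class SequenceLevelDecoder(nn.Module):
    """Baseline: flattens all patch embeddings and decodes the series jointly."""

    def __init__(self, d_model: int, num_patches: int, horizon: int):
        super().__init__()
        # A single large head over the concatenated patch embeddings.
        self.proj = nn.Linear(d_model * num_patches, horizon)

    def forward(self, patch_repr: torch.Tensor) -> torch.Tensor:
        return self.proj(patch_repr.flatten(start_dim=1))  # (batch, horizon)


if __name__ == "__main__":
    batch, num_patches, d_model, patch_len = 8, 16, 256, 12
    reps = torch.randn(batch, num_patches, d_model)  # stand-in for LLM outputs
    print(PatchWiseDecoder(d_model, patch_len)(reps).shape)             # (8, 192)
    print(SequenceLevelDecoder(d_model, num_patches, 192)(reps).shape)  # (8, 192)
```

Note the design difference: the patch-wise head keeps a one-to-one mapping between each patch representation and its segment of the output series, whereas the sequence-level head entangles all patches in a single projection, which is what the abstract argues weakens patch-level representation learning.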

Authors (6)
  1. Yuxuan Bian (9 papers)
  2. Xuan Ju (19 papers)
  3. Jiangtong Li (24 papers)
  4. Zhijian Xu (15 papers)
  5. Dawei Cheng (38 papers)
  6. Qiang Xu (129 papers)
Citations (9)