
PT-Tuning: Bridging the Gap between Time Series Masked Reconstruction and Forecasting via Prompt Token Tuning (2311.03768v1)

Published 7 Nov 2023 in cs.LG and cs.AI

Abstract: Self-supervised learning has been actively studied in the time series domain recently, especially for masked reconstruction. Most of these methods follow the "Pre-training + Fine-tuning" paradigm, in which a new decoder replaces the pre-trained decoder to fit a specific downstream task, leading to inconsistency between upstream and downstream tasks. In this paper, we first point out that unifying task objectives and adapting to task difficulty are critical for bridging the gap between time series masked reconstruction and forecasting. By retaining the pre-trained mask token during the fine-tuning stage, forecasting can be treated as a special case of masked reconstruction, where the future values are masked and reconstructed from historical values. This guarantees consistency of task objectives, but a gap in task difficulty remains: masked reconstruction can exploit contextual information, whereas forecasting can only use historical information to reconstruct. To further mitigate this gap, we propose a simple yet effective prompt token tuning (PT-Tuning) paradigm, in which all pre-trained parameters are frozen and only a few trainable prompt tokens are added to the extended mask tokens in an element-wise manner. Extensive experiments on real-world datasets demonstrate the superiority of the proposed paradigm, achieving state-of-the-art performance compared to representation learning and end-to-end supervised forecasting methods.
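
The mechanism described in the abstract lends itself to a short sketch. Below is a minimal, hypothetical PyTorch rendering of the idea, assuming an already pre-trained encoder and a reserved mask token; the names (`PTTuningForecaster`, `backbone`, `mask_token`, `horizon`) are illustrative assumptions, not the authors' released code. The backbone and mask token are frozen, future positions are filled with the mask token so that forecasting becomes masked reconstruction, and only the element-wise prompt tokens are trained.

```python
# Hypothetical sketch of PT-Tuning; not the authors' implementation.
import torch
import torch.nn as nn


class PTTuningForecaster(nn.Module):
    """Forecasting framed as masked reconstruction: future positions are
    filled with the pre-trained mask token, and only a small set of prompt
    tokens (added element-wise to those mask tokens) is trainable."""

    def __init__(self, backbone: nn.Module, mask_token: torch.Tensor,
                 horizon: int, d_model: int):
        super().__init__()
        self.backbone = backbone  # pre-trained encoder, kept frozen
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Reserved mask token from pre-training, shape (d_model,); frozen.
        self.register_buffer("mask_token", mask_token)
        # The only trainable parameters: one prompt token per future step.
        self.prompt_tokens = nn.Parameter(torch.zeros(horizon, d_model))
        self.horizon = horizon

    def forward(self, history_emb: torch.Tensor) -> torch.Tensor:
        # history_emb: (batch, seq_len, d_model), already embedded.
        batch = history_emb.size(0)
        # Repeat the mask token over the forecast horizon and add the
        # prompt tokens element-wise (broadcast over the batch dimension).
        future = self.mask_token.expand(batch, self.horizon, -1)
        future = future + self.prompt_tokens
        # Append the masked future to the history and reconstruct it,
        # mirroring the pre-training objective.
        x = torch.cat([history_emb, future], dim=1)
        out = self.backbone(x)
        return out[:, -self.horizon:]  # reconstructed future positions
```

Under this reading, fine-tuning optimizes only `horizon * d_model` parameters, which is what makes the paradigm lightweight relative to replacing the decoder.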

Authors (11)
  1. Hao Liu (497 papers)
  2. Jinrui Gan (1 paper)
  3. Xiaoxuan Fan (1 paper)
  4. Yi Zhang (994 papers)
  5. Chuanxian Luo (1 paper)
  6. Jing Zhang (731 papers)
  7. Guangxin Jiang (6 papers)
  8. Yucheng Qian (1 paper)
  9. Changwei Zhao (3 papers)
  10. Huan Ma (21 papers)
  11. Zhenyu Guo (21 papers)
Citations (2)
