
MTSMAE: Masked Autoencoders for Multivariate Time-Series Forecasting (2210.02199v1)

Published 4 Oct 2022 in cs.LG and cs.AI

Abstract: Large-scale self-supervised pre-trained Transformer architectures have significantly boosted performance on a variety of tasks in NLP and computer vision (CV). However, there is little research on processing multivariate time-series with pre-trained Transformers, and in particular, masking time-series for self-supervised learning remains largely unexplored. Unlike language and image processing, the information density of time-series increases the difficulty of this research, and the challenge is compounded by the fact that previous patch embedding and masking methods are not directly applicable. In this paper, based on the data characteristics of multivariate time-series, we propose a patch embedding method and present a self-supervised pre-training approach based on Masked Autoencoders (MAE), called MTSMAE, which significantly improves performance over supervised learning without pre-training. Evaluating our method on several common multivariate time-series datasets from different fields and with different characteristics, the experimental results demonstrate that our method significantly outperforms the best methods currently available.
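
To make the abstract's two ingredients concrete (patch embedding of a multivariate series, followed by MAE-style random masking of the resulting tokens), here is a minimal sketch. It assumes non-overlapping patches of length `patch_len`, a linear projection to the model dimension, and a fixed mask ratio; the class and function names (`PatchEmbed`, `random_mask`) and all hyperparameters are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class PatchEmbed(nn.Module):
    """Split a multivariate series (B, L, C) into non-overlapping patches of
    length `patch_len` and linearly project each patch to `d_model`.
    Hypothetical sketch; the paper's embedding may differ in detail."""

    def __init__(self, patch_len: int, n_vars: int, d_model: int):
        super().__init__()
        self.patch_len = patch_len
        self.proj = nn.Linear(patch_len * n_vars, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, L, C = x.shape
        assert L % self.patch_len == 0, "series length must divide into patches"
        # (B, L, C) -> (B, N, patch_len * C) with N = L // patch_len
        x = x.reshape(B, L // self.patch_len, self.patch_len * C)
        return self.proj(x)  # (B, N, d_model)


def random_mask(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """MAE-style random masking: keep a random subset of patch tokens.
    Returns the visible tokens and the indices needed to restore order
    when the decoder reconstructs the masked patches."""
    B, N, D = tokens.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N, device=tokens.device)   # random score per token
    ids_shuffle = noise.argsort(dim=1)                # random permutation
    ids_restore = ids_shuffle.argsort(dim=1)          # inverse permutation
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return visible, ids_restore


# Example: a batch of 24-step windows with 7 variables, patch length 4
x = torch.randn(8, 24, 7)
tokens = PatchEmbed(patch_len=4, n_vars=7, d_model=64)(x)    # (8, 6, 64)
visible, ids_restore = random_mask(tokens, mask_ratio=0.75)  # only 25% visible
```

In an MAE-style pipeline, only the visible tokens would be fed to the Transformer encoder, and a lightweight decoder would reconstruct the masked patches during pre-training; the mask ratio and patch length above are placeholder values.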

Authors (2)
  1. Peiwang Tang (4 papers)
  2. Xianchao Zhang (15 papers)
Citations (11)
