Weakly Augmented Variational Autoencoder in Time Series Anomaly Detection (2401.03341v1)
Abstract: Due to their unsupervised training and uncertainty estimation, deep Variational Autoencoders (VAEs) have become powerful tools for reconstruction-based Time Series Anomaly Detection (TSAD). Existing VAE-based TSAD methods, whether statistical or deep, tune meta-priors to estimate the likelihood so as to effectively capture spatiotemporal dependencies in the data. However, these methods face the challenge of inherent data scarcity, which is common in anomaly detection tasks. Such scarcity easily leads to latent holes, i.e., discontinuous regions in the latent space, which in turn yield non-robust reconstructions over those regions. We propose a novel generative framework that combines VAEs with self-supervised learning (SSL) to address this issue.
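To make the reconstruction-based setup concrete, here is a minimal sketch of the generic pipeline the abstract refers to: a VAE trained on (mostly normal) fixed-length time-series windows, with per-window reconstruction error used as the anomaly score. This is an illustrative baseline, not the paper's weakly augmented model; the window length, network sizes, and synthetic data are placeholder assumptions.

```python
# Generic reconstruction-based TSAD with a VAE (illustrative sketch only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class WindowVAE(nn.Module):
    def __init__(self, window_len: int, latent_dim: int = 8, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(window_len, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, window_len)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar


def neg_elbo(x, x_hat, mu, logvar):
    # Negative ELBO: reconstruction term + KL(q(z|x) || N(0, I)).
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl


@torch.no_grad()
def anomaly_scores(model, windows):
    # Per-window reconstruction error; higher score = more anomalous.
    x_hat, _, _ = model(windows)
    return ((windows - x_hat) ** 2).mean(dim=1)


if __name__ == "__main__":
    torch.manual_seed(0)
    W = 32  # assumed window length
    # Sliding windows over a synthetic "normal" signal.
    train = torch.sin(torch.linspace(0, 50, 2000)).unfold(0, W, 1)
    model = WindowVAE(W)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):
        x_hat, mu, logvar = model(train)
        loss = neg_elbo(train, x_hat, mu, logvar)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # A window with an injected spike should score higher than a normal one.
    test = train[:1].clone()
    test[0, W // 2] += 5.0
    print(anomaly_scores(model, torch.cat([train[:1], test])))
```

In practice, windows whose score exceeds a threshold fitted on held-out normal data (e.g., a high quantile of training-set scores) are flagged as anomalous; the paper's contribution concerns how the latent space is regularized via self-supervised weak augmentation, not this scoring step itself.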