
Concrete Dense Network for Long-Sequence Time Series Clustering (2405.05015v1)

Published 8 May 2024 in cs.LG

Abstract: Time series clustering is fundamental in data analysis for discovering temporal patterns. Despite recent advancements, learning cluster-friendly representations remains challenging, particularly for long and complex time series. Deep temporal clustering methods have tried to integrate the canonical k-means objective into the end-to-end training of neural networks, but fall back on surrogate losses due to the non-differentiability of the hard cluster assignment, yielding sub-optimal solutions. In addition, the autoregressive strategy used in state-of-the-art RNNs is subject to error accumulation and slow training, while recent findings have revealed that Transformers are less effective because individual time points lack semantic meaning, the permutation invariance of attention discards chronological order, and the computation cost is high. In light of these observations, we present LoSTer, a novel dense autoencoder architecture for the long-sequence time series clustering (LSTC) problem that optimizes the k-means objective via the Gumbel-softmax reparameterization trick and is designed specifically for accurate and fast clustering of long time series. Extensive experiments on numerous benchmark datasets and two real-world applications demonstrate the effectiveness of LoSTer over state-of-the-art RNN- and Transformer-based deep clustering methods.
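The key mechanism the abstract names, optimizing the k-means objective end to end despite the hard cluster assignment, rests on the Gumbel-softmax (straight-through) reparameterization. Below is a minimal PyTorch sketch of that general technique; it is an illustration under stated assumptions, not the paper's actual LoSTer implementation, and the function name, tensor shapes, and temperature value are hypothetical.

```python
import torch
import torch.nn.functional as F

def concrete_kmeans_loss(z, centroids, tau=1.0):
    """Differentiable k-means objective via the Gumbel-softmax trick.

    z:         (batch, d) latent representations from an encoder
    centroids: (k, d) learnable cluster centers
    tau:       softmax temperature (typically annealed toward 0)
    """
    # Negative squared distances to each centroid act as assignment logits.
    logits = -torch.cdist(z, centroids) ** 2                # (batch, k)
    # Straight-through Gumbel-softmax: hard one-hot assignments in the
    # forward pass, soft (differentiable) gradients in the backward pass.
    assign = F.gumbel_softmax(logits, tau=tau, hard=True)   # (batch, k)
    # k-means objective: distance of each point to its assigned centroid.
    return ((z - assign @ centroids) ** 2).sum(dim=1).mean()

# Usage sketch: encoder outputs and centroids are trained jointly.
z = torch.randn(32, 16)                                     # stand-in for encoder output
centroids = torch.nn.Parameter(torch.randn(8, 16))          # k = 8 clusters
loss = concrete_kmeans_loss(z, centroids, tau=0.5)
loss.backward()
```

Annealing tau toward zero sharpens the stochastic assignments into nearly one-hot ones while keeping gradients usable throughout training, which is what lets the hard k-means assignment sit inside an end-to-end objective.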

