
A Survey of Generative Techniques for Spatial-Temporal Data Mining (2405.09592v1)

Published 15 May 2024 in cs.LG, cs.AI, and cs.CE

Abstract: This paper examines the integration of generative techniques into spatial-temporal data mining, motivated by the rapid growth and diverse nature of spatial-temporal data. Building on advances in RNNs, CNNs, and other non-generative techniques, researchers have explored how to capture the temporal and spatial dependencies within such data. The emergence of generative techniques such as LLMs, self-supervised learning (SSL), Seq2Seq models, and diffusion models has opened up new possibilities for further enhancing spatial-temporal data mining. The paper provides a comprehensive analysis of generative technique-based spatial-temporal methods and introduces a standardized framework designed for the spatial-temporal data mining pipeline. Through a detailed review and a novel taxonomy of spatial-temporal methodologies that employ generative techniques, it offers a deeper understanding of the approaches used in this field. The paper also highlights promising future research directions, urging researchers to explore untapped opportunities and push the boundaries of the field to unlock new insights and improve the effectiveness and efficiency of spatial-temporal data mining. By integrating generative techniques under a standardized framework, the survey aims to advance the field and encourage researchers to explore their potential in spatial-temporal data mining.
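
To make the Seq2Seq family mentioned in the abstract concrete, the sketch below shows a minimal encoder-decoder forecaster for spatial-temporal data (e.g., multi-step prediction of readings from a set of traffic sensors). It is an illustrative assumption, not a method from the paper: the class name, layer sizes, and toy data are hypothetical, and real surveyed systems add spatial modules such as graph convolutions.

```python
# Minimal sketch (assumed, not from the paper) of a Seq2Seq encoder-decoder
# for multi-step spatial-temporal forecasting with PyTorch.
import torch
import torch.nn as nn


class Seq2SeqForecaster(nn.Module):
    def __init__(self, num_nodes: int, hidden_dim: int = 64):
        super().__init__()
        # Encoder summarizes the observed window into a hidden state.
        self.encoder = nn.GRU(num_nodes, hidden_dim, batch_first=True)
        # Decoder unrolls future steps from that state.
        self.decoder = nn.GRU(num_nodes, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, num_nodes)

    def forward(self, history: torch.Tensor, horizon: int) -> torch.Tensor:
        # history: (batch, T, num_nodes)
        _, state = self.encoder(history)
        step = history[:, -1:, :]          # seed decoder with the last observation
        outputs = []
        for _ in range(horizon):
            out, state = self.decoder(step, state)
            step = self.proj(out)          # (batch, 1, num_nodes)
            outputs.append(step)
        return torch.cat(outputs, dim=1)   # (batch, horizon, num_nodes)


if __name__ == "__main__":
    model = Seq2SeqForecaster(num_nodes=207)   # e.g., METR-LA has 207 sensors
    x = torch.randn(8, 12, 207)                # 8 samples, 12 observed steps
    y_hat = model(x, horizon=12)               # forecast the next 12 steps
    print(y_hat.shape)                         # torch.Size([8, 12, 207])
```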
