
CrimeGAT: Leveraging Graph Attention Networks for Enhanced Predictive Policing in Criminal Networks (2311.18641v1)

Published 30 Nov 2023 in cs.SI

Abstract: In this paper, we present CrimeGAT, a novel application of Graph Attention Networks (GATs) for predictive policing in criminal networks. Criminal networks pose unique challenges for predictive analytics due to their complex structure, multi-relational links, and dynamic behavior. Traditional methods often fail to capture these complexities, leading to suboptimal predictions. To address these challenges, we propose the use of GATs, which can effectively leverage both node features and graph structure to make predictions. Our proposed CrimeGAT model integrates attention mechanisms to weigh the importance of a node's neighbors, thereby capturing the local and global structures of criminal networks. We formulate the problem as learning a function that maps node features and graph structure to a prediction of future criminal activity. The experimental results on real-world datasets demonstrate that CrimeGAT outperforms conventional methods in predicting criminal activities, thereby providing a powerful tool for law enforcement agencies to proactively deploy resources. Furthermore, the interpretable nature of the attention mechanism in GATs offers insights into the key players and relationships in criminal networks. This research opens new avenues for applying deep learning techniques in the field of predictive policing and criminal network analysis.
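The paper does not include code, but the core mechanism the abstract describes, a single attention head that scores each neighbor before aggregating its features, can be sketched in NumPy following the standard GAT formulation of Veličković et al. This is an illustrative sketch, not the CrimeGAT implementation; the shapes, parameter names, and the choice of a single head are assumptions for clarity.

```python
import numpy as np

def gat_layer(H, A, W, a, leaky_slope=0.2):
    """One single-head graph attention layer (illustrative sketch).

    H: (N, F) node feature matrix
    A: (N, N) adjacency matrix with self-loops (nonzero = edge)
    W: (F, F_out) shared linear projection
    a: (2 * F_out,) attention parameter vector
    Returns updated node features of shape (N, F_out).
    """
    Z = H @ W                                  # project node features
    f_out = Z.shape[1]
    # attention logit e_ij = LeakyReLU(a^T [z_i || z_j]),
    # split a into the source and neighbor halves
    src = Z @ a[:f_out]                        # per-node source term
    dst = Z @ a[f_out:]                        # per-node neighbor term
    e = src[:, None] + dst[None, :]            # (N, N) pairwise logits
    e = np.where(e > 0, e, leaky_slope * e)    # LeakyReLU
    e = np.where(A > 0, e, -1e9)               # mask non-neighbors
    # softmax over each node's neighborhood
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ Z                           # attention-weighted aggregation
```

In a predictive-policing setting as the abstract frames it, `H` would hold per-individual features, `A` the (possibly multi-relational, here simplified to binary) criminal-network links, and the rows of `alpha` are the interpretable neighbor weights the authors point to. Stacking such layers and adding a classification head would yield the node-level activity predictions; multi-head attention and the paper's specific training objective are omitted here.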

