Reinforcement Neighborhood Selection for Unsupervised Graph Anomaly Detection (2312.05526v1)

Published 9 Dec 2023 in cs.LG and cs.AI

Abstract: Unsupervised graph anomaly detection is crucial for various practical applications, as it aims to identify anomalies in a graph that exhibit rare patterns deviating significantly from the majority of nodes. Recent advancements have utilized Graph Neural Networks (GNNs) to learn high-quality node representations for anomaly detection by aggregating information from neighborhoods. However, the presence of anomalies may render the observed neighborhood unreliable and result in misleading information aggregation for node representation learning. Selecting the proper neighborhood is therefore critical for graph anomaly detection, but it is also challenging due to the absence of anomaly-oriented guidance and the interdependence with representation learning. To address these issues, we leverage the strength of reinforcement learning in adapting to complex environments and propose a novel method that incorporates Reinforcement neighborhood selection for unsupervised graph ANomaly Detection (RAND). RAND begins by enriching the candidate neighbor pool of the given central node with multiple types of indirect neighbors. Next, RAND designs a tailored reinforcement anomaly evaluation module to assess the reliability and reward of including each candidate neighbor. Finally, RAND selects the most reliable subset of neighbors based on these rewards and introduces an anomaly-aware aggregator that amplifies messages from reliable neighbors while diminishing messages from unreliable ones. Extensive experiments on three synthetic and two real-world datasets demonstrate that RAND outperforms state-of-the-art methods.
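The abstract describes a three-step pipeline: enrich the candidate neighbor pool with indirect neighbors, score each candidate's reliability, and aggregate messages with weights that favor reliable neighbors. The sketch below is a minimal, hedged illustration of that flow, not the paper's method: it assumes NumPy, uses 2-hop neighbors as the "indirect" candidates, and substitutes a simple cosine-similarity scorer for the paper's learned reinforcement anomaly evaluation module. All function names (two_hop_candidates, reliability_scores, anomaly_aware_aggregate) are hypothetical.

```python
import numpy as np

def two_hop_candidates(adj, v):
    """Candidate pool: direct neighbors plus 2-hop (indirect) neighbors of node v."""
    direct = set(np.nonzero(adj[v])[0])
    indirect = set()
    for u in direct:
        indirect |= set(np.nonzero(adj[u])[0])
    indirect -= {v}
    return sorted(direct | indirect)

def reliability_scores(x, v, candidates):
    """Stand-in scorer: cosine similarity to the central node's features.
    (RAND instead learns reliability/reward with a reinforcement module.)"""
    xv = x[v]
    sims = []
    for u in candidates:
        xu = x[u]
        denom = np.linalg.norm(xv) * np.linalg.norm(xu) + 1e-12
        sims.append(float(xv @ xu) / denom)
    return np.asarray(sims)

def anomaly_aware_aggregate(x, v, adj, k=3):
    """Aggregate messages from the k most reliable candidates, weighted by score."""
    cand = two_hop_candidates(adj, v)
    if not cand:
        return x[v].copy()
    scores = reliability_scores(x, v, cand)
    top = np.argsort(scores)[::-1][:k]          # keep the k most reliable neighbors
    selected = [cand[i] for i in top]
    w = np.exp(scores[top])                     # amplify reliable, diminish unreliable
    w /= w.sum()
    return w @ x[selected]                      # weighted mean of selected features

# Toy example: 5 nodes, node 4 is a feature outlier and gets down-weighted.
adj = np.array([[0, 1, 1, 0, 1],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 1, 0, 0],
                [1, 0, 0, 0, 0]])
x = np.array([[1.0, 0.0], [0.9, 0.1], [1.1, -0.1], [1.0, 0.2], [-5.0, 5.0]])
print(anomaly_aware_aggregate(x, v=0, adj=adj, k=2))
```

The softmax-style weighting keeps the aggregation differentiable while still suppressing low-reliability candidates; in the paper this selection is driven by learned rewards rather than a fixed similarity heuristic.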

Authors (7)
  1. Yuanchen Bei (23 papers)
  2. Sheng Zhou (186 papers)
  3. Qiaoyu Tan (36 papers)
  4. Hao Xu (351 papers)
  5. Hao Chen (1006 papers)
  6. Zhao Li (109 papers)
  7. Jiajun Bu (52 papers)
Citations (9)
