SENet: Visual Detection of Online Social Engineering Attack Campaigns (2401.05569v1)

Published 10 Jan 2024 in cs.CR and cs.LG

Abstract: Social engineering (SE) aims at deceiving users into performing actions that may compromise their security and privacy. These threats exploit weaknesses in humans' decision-making processes by using tactics such as pretexting, baiting, and impersonation. On the web, SE attacks include attack classes such as scareware, tech support scams, survey scams, and sweepstakes, which can result in sensitive data leaks, malware infections, and monetary loss. For instance, US consumers lose billions of dollars annually to various SE attacks. Unfortunately, generic social engineering attacks remain understudied compared to other important threats, such as software vulnerabilities and exploitation, network intrusions, malicious software, and phishing. The few existing technical studies that focus on social engineering are limited in scope and mostly focus on measurements rather than developing a generic defense. To fill this gap, we present SEShield, a framework for in-browser detection of social engineering attacks. SEShield consists of three main components: (i) a custom security crawler, called SECrawler, that is dedicated to scouting the web to collect examples of in-the-wild SE attacks; (ii) SENet, a deep learning-based image classifier trained on data collected by SECrawler that aims to detect the often glaring visual traits of SE attack pages; and (iii) SEGuard, a proof-of-concept extension that embeds SENet into the web browser and enables real-time SE attack detection. We perform an extensive evaluation of our system and show that SENet is able to detect new instances of SE attacks with a detection rate of up to 99.6% at a 1% false positive rate, thus providing an effective first defense against SE attacks on the web.
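The headline result (up to 99.6% detection at a 1% false positive rate) describes an operating point on the classifier's ROC curve: the decision threshold is chosen so that at most 1% of benign pages are flagged, and the detection rate is then measured on attack pages. The sketch below is not from the paper; it is a minimal, assumption-laden illustration of how such an operating point can be computed from classifier scores, assuming higher scores indicate SE attack pages and that scores are distinct:

```python
def tpr_at_fpr(scores, labels, target_fpr=0.01):
    """Detection rate (true positive rate) at the strictest threshold whose
    false positive rate stays at or below target_fpr.

    scores: classifier confidence that a page is an SE attack (higher = attack)
    labels: 1 for SE attack pages, 0 for benign pages
    """
    negatives = sorted((s for s, y in zip(scores, labels) if y == 0),
                       reverse=True)
    positives = [s for s, y in zip(scores, labels) if y == 1]
    # Number of benign pages we may misclassify without exceeding target_fpr.
    budget = int(target_fpr * len(negatives))
    # Set the threshold at the (budget+1)-th highest benign score, so that
    # exactly `budget` benign pages score strictly above it (assumes distinct
    # scores; ties could push the realized FPR above the budget).
    threshold = negatives[budget] if budget < len(negatives) else float("-inf")
    detected = sum(1 for s in positives if s > threshold)
    return detected / len(positives)
```

In practice a library routine (e.g. an ROC curve implementation) would be used instead, but the principle is the same: fix the tolerable false positive rate on benign traffic first, then report how many attack pages the model catches at that threshold.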

Authors (4)
  1. Irfan Ozen (1 paper)
  2. Karthika Subramani (5 papers)
  3. Phani Vadrevu (4 papers)
  4. Roberto Perdisci (9 papers)