
From Chatbots to PhishBots? -- Preventing Phishing scams created using ChatGPT, Google Bard and Claude (2310.19181v2)

Published 29 Oct 2023 in cs.CR and cs.CL

Abstract: The advanced capabilities of LLMs have made them invaluable across various applications, from conversational agents and content creation to data analysis, research, and innovation. However, their effectiveness and accessibility also render them susceptible to abuse for generating malicious content, including phishing attacks. This study explores the potential of using four popular commercially available LLMs, i.e., ChatGPT (GPT-3.5 Turbo), GPT-4, Claude, and Bard, to generate functional phishing attacks using a series of malicious prompts. We discover that these LLMs can generate both phishing websites and emails that convincingly imitate well-known brands, and can also deploy a range of evasive tactics used to elude the detection mechanisms employed by anti-phishing systems. These attacks can be generated using unmodified, or "vanilla," versions of these LLMs, without requiring any prior adversarial exploits such as jailbreaking. We evaluate the performance of the LLMs in generating these attacks and find that they can also be used to create malicious prompts that, in turn, can be fed back to the model to generate phishing scams, thus massively reducing the prompt-engineering effort required by attackers to scale these threats. As a countermeasure, we build a BERT-based automated detection tool that can be used for the early detection of malicious prompts, preventing LLMs from generating phishing content. Our model is transferable across all four commercial LLMs, attaining an average accuracy of 96% for phishing website prompts and 94% for phishing email prompts. We also disclosed these vulnerabilities to the affected LLM vendors, with Google acknowledging it as a severe issue. Our detection model is available on Hugging Face, as well as through a ChatGPT Actions plugin.
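The countermeasure described in the abstract is a BERT-based classifier that flags malicious prompts before they reach an LLM. The sketch below shows, under stated assumptions, how such a detector could be wired up: the binary label mapping, the stand-in checkpoint name, and the helper names are illustrative, not the authors' released model or API.

```python
# Hedged sketch of a BERT-based malicious-prompt detector, in the spirit of
# the paper's countermeasure. The label mapping and the default checkpoint
# are illustrative assumptions, not the authors' released artifacts.
import torch

LABELS = {0: "benign", 1: "malicious"}  # assumed binary labeling


def classify_prompt(text: str, model, tokenizer, max_len: int = 128) -> str:
    """Classify a single user prompt as 'benign' or 'malicious'."""
    inputs = tokenizer(
        text, truncation=True, max_length=max_len, return_tensors="pt"
    )
    with torch.no_grad():                 # inference only, no gradients
        logits = model(**inputs).logits   # shape: (1, num_labels)
    return LABELS[int(logits.argmax(dim=-1))]


def load_detector(checkpoint: str = "bert-base-uncased"):
    """Load a tokenizer and sequence-classification head.

    The default checkpoint is a generic stand-in; in practice one would
    fine-tune it on labeled prompts or load the authors' published model
    from Hugging Face.
    """
    from transformers import (
        AutoModelForSequenceClassification,
        AutoTokenizer,
    )
    tok = AutoTokenizer.from_pretrained(checkpoint)
    mdl = AutoModelForSequenceClassification.from_pretrained(
        checkpoint, num_labels=len(LABELS)
    )
    mdl.eval()
    return mdl, tok
```

Because `classify_prompt` takes the model and tokenizer as arguments, the same gate (e.g. `mdl, tok = load_detector()` followed by `classify_prompt(prompt, mdl, tok)`) could sit in front of any of the four LLM APIs the paper studies, rejecting prompts classified as malicious before they are forwarded.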

Authors (4)
  1. Sayak Saha Roy
  2. Poojitha Thota
  3. Krishna Vamsi Naragam
  4. Shirin Nilizadeh