From Chatbots to PhishBots? -- Preventing Phishing scams created using ChatGPT, Google Bard and Claude (2310.19181v2)
Abstract: The advanced capabilities of LLMs have made them invaluable across applications ranging from conversational agents and content creation to data analysis, research, and innovation. However, their effectiveness and accessibility also make them susceptible to abuse for generating malicious content, including phishing attacks. This study explores the potential of four popular commercially available LLMs, namely ChatGPT (GPT-3.5 Turbo), GPT-4, Claude, and Bard, to generate functional phishing attacks using a series of malicious prompts. We find that these LLMs can generate both phishing websites and emails that convincingly imitate well-known brands, and can also deploy a range of evasive tactics designed to elude the detection mechanisms employed by anti-phishing systems. These attacks can be generated using unmodified, "vanilla" versions of the LLMs, without requiring any prior adversarial exploits such as jailbreaking. We evaluate the performance of the LLMs at generating these attacks and find that they can also be used to create malicious prompts that, in turn, can be fed back to the model to generate phishing scams, massively reducing the prompt-engineering effort attackers need to scale these threats. As a countermeasure, we build a BERT-based automated detection tool for the early detection of malicious prompts, preventing the LLMs from generating phishing content in the first place. Our model transfers across all four commercial LLMs, attaining an average accuracy of 96% for phishing website prompts and 94% for phishing email prompts. We disclosed these vulnerabilities to the respective LLM providers, with Google acknowledging the issue as severe. Our detection model is available on Hugging Face, as well as a ChatGPT Actions plugin.
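The countermeasure described in the abstract amounts to a gatekeeper that classifies a prompt before it reaches the LLM. A minimal sketch of that deployment pattern is below; note that the trivial keyword heuristic merely stands in for the paper's fine-tuned BERT classifier, and the function names, keyword list, and threshold are illustrative assumptions, not from the paper:

```python
# Sketch of a prompt-screening gate placed in front of an LLM, in the spirit
# of the paper's countermeasure. In a real deployment, `score_prompt` would
# call the authors' BERT-based detector; the keyword heuristic below is only
# a stand-in so the control flow is runnable.

SUSPICIOUS_TERMS = (  # illustrative indicators, not the paper's features
    "phishing", "credential", "fake login", "imitate", "clone the website",
)

def score_prompt(prompt: str) -> float:
    """Return a crude maliciousness score in [0, 1] (stand-in classifier)."""
    text = prompt.lower()
    hits = sum(term in text for term in SUSPICIOUS_TERMS)
    return min(1.0, hits / 2)

def gate(prompt: str, threshold: float = 0.5) -> bool:
    """True if the prompt may be forwarded to the LLM, False if blocked."""
    return score_prompt(prompt) < threshold

print(gate("Summarize this quarterly report for me"))            # forwarded
print(gate("Create a fake login page that imitates PayPal "
           "to harvest credential data"))                         # blocked
```

Swapping `score_prompt` for a real text classifier leaves `gate` unchanged, which is what makes this early-detection design transferable across different backing LLMs.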
- M. Southern. (2023) Chatgpt examples: 5 ways businesses are using openai’s language model. [Online]. Available: https://www.searchenginejournal.com/chatgpt-examples/474937/
- S. Jalil, S. Rafi, T. D. LaToza, K. Moran, and W. Lam, “Chatgpt and software testing education: Promises & perils,” arXiv preprint arXiv:2302.03287, 2023.
- J. Qadir, “Engineering education in the era of chatgpt: Promise and pitfalls of generative ai for education,” 2022.
- S. Biswas, “Chatgpt and the future of medical writing,” Radiology, vol. 307, no. 2, p. e223312, 2023.
- “Ai like chatgpt is creating huge increase in malicious phishing emails,” CNBC, Nov. 2023, retrieved from https://www.cnbc.com/2023/11/28/ai-like-chatgpt-is-creating-huge-increase-in-malicious-phishing-email.html [accessed December 6, 2023].
- “Report links ChatGPT to 1,265% rise in phishing emails,” 2023.
- “Fraudgpt and wormgpt: Ai-driven tools that help attackers conduct phishing campaigns,” SecureOps Managed Security Support Services Monthly Blog Articles, Oct. 2023. [Online]. Available: https://secureops.com/blog/ai-attacks-fraudgpt/
- MiniTool. (2023) ChatGPT: This content may violate our content policy. MiniTool. [Online]. Available: https://www.minitool.com/news/chatgpt-this-content-may-violate-our-content-policy.html
- OpenAI, “Openai usage policies,” 2021. [Online]. Available: https://openai.com/policies/usage-policies/
- R. Karanjai, “Targeted phishing campaigns using large scale language models,” arXiv preprint arXiv:2301.00665, 2023.
- C. Hoffman, “It’s scary easy to use chatgpt to write phishing emails,” CNET, 2023. [Online]. Available: https://cnet.co/3J72IPV
- E. Kovacs. (2023) Malicious prompt engineering with ChatGPT. SecurityWeek. [Online]. Available: https://www.securityweek.com/malicious-prompt-engineering-with-chatgpt/
- T. Tucker, “A consumer-protection agency warns that scammers are using ai to make their schemes more convincing and dangerous,” Business Insider, March 2023. [Online]. Available: https://bit.ly/3YFu5WN
- M. Shkatov. (2023, January) Chatting our way into creating a polymorphic malware. CyberArk. [Online]. Available: https://www.cyberark.com/resources/threat-research-blog/chatting-our-way-into-creating-a-polymorphic-malware
- L. Cohen. (2023) Chatgpt hack allows chatbot to generate malware. [Online]. Available: https://www.digitaltrends.com/computing/chatgpt-hack-allows-chatbot-to-generate-malware/
- K. Alper and I. Cohen, “Opwnai: Cybercriminals starting to use gpt for impersonation and social engineering,” Check Point Research, March 2023. [Online]. Available: https://research.checkpoint.com/2023/opwnai-cybercriminals-starting-to-use-chatgpt/
- F. Lai, “The carbon footprint of GPT-4,” Towards Data Science, 2023. [Online]. Available: https://towardsdatascience.com/the-carbon-footprint-of-gpt-4-d6c676eb21ae
- “ChatGPT vs Microsoft Copilot: The major differences,” UC Today, 2023. [Online]. Available: https://www.uctoday.com/unified-communications/chatgpt-vs-microsoft-copilot-the-major-differences/
- Check Point Software, “What is phishing?” 2023. [Online]. Available: https://www.checkpoint.com/cyber-hub/threat-prevention/what-is-phishing/
- J. S. Downs, M. Holbrook, and L. F. Cranor, “Behavioral response to phishing risk,” in Proceedings of the anti-phishing working groups 2nd annual eCrime researchers summit, 2007, pp. 37–44.
- J. Erkkila, “Why we fall for phishing,” in Proceedings of the SIGCHI conference on Human Factors in Computing Systems CHI 2011. ACM, 2011, pp. 7–12.
- M. Butavicius, R. Taib, and S. J. Han, “Why people keep falling for phishing scams: The effects of time pressure and deception cues on the detection of phishing emails,” Computers & Security, vol. 123, p. 102937, 2022.
- Z. Alkhalil, C. Hewage, L. Nawaf, and I. Khan, “Phishing attacks: A recent comprehensive study and a new anatomy,” Frontiers in Computer Science, vol. 3, p. 563060, 2021.
- “The phishing landscape 2023,” Interisle Consulting Group, Tech. Rep., 2023. [Online]. Available: https://interisle.net/PhishingLandscape2023.pdf
- Bitdefender, “TrafficLight,” https://www.bitdefender.com/solutions/trafficlight.html.
- “Mcafee WebAdvisor,” https://www.mcafee.com/en-us/safe-browser/mcafee-webadvisor.html, 2022.
- “PhishTank,” https://www.phishtank.com/faq.php, 2020.
- OpenPhish, “Phishing feed,” https://openphish.com/faq.html.
- A. Oest, Y. Safaei, P. Zhang, B. Wardman, K. Tyers, Y. Shoshitaishvili, and A. Doupé, “Phishtime: Continuous longitudinal measurement of the effectiveness of anti-phishing blacklists,” in 29th USENIX Security Symposium (USENIX Security 20), 2020, pp. 379–396.
- P. Zhang, A. Oest, H. Cho, Z. Sun, R. Johnson, B. Wardman, S. Sarker, A. Kapravelos, T. Bao, R. Wang et al., “Crawlphish: Large-scale analysis of client-side cloaking techniques in phishing,” in 2021 IEEE Symposium on Security and Privacy (SP). IEEE, 2021, pp. 1109–1124.
- A. Oest, P. Zhang, B. Wardman, E. Nunes, J. Burgis, A. Zand, K. Thomas, A. Doupé, and G.-J. Ahn, “Sunrise to sunset: Analyzing the end-to-end life cycle and effectiveness of phishing attacks at scale,” in 29th USENIX Security Symposium (USENIX Security 20), 2020.
- D. Akhawe and A. P. Felt, “Alice in warningland: A large-scale field study of browser security warning effectiveness,” in 22nd USENIX Security Symposium (USENIX Security 13), 2013, pp. 257–272.
- Proofpoint Threat Insight Team. (2023) Have a money latte? Then you too can buy a phish kit. [Online]. Available: https://www.proofpoint.com/us/blog/threat-insight/have-money-latte-then-you-too-can-buy-phish-kit
- A. Oest, Y. Safei, A. Doupé, G.-J. Ahn, B. Wardman, and G. Warner, “Inside a phisher’s mind: Understanding the anti-phishing ecosystem through phishing kit analysis,” in 2018 APWG Symposium on Electronic Crime Research (eCrime). IEEE, 2018, pp. 1–12.
- H. Bijmans, T. Booij, A. Schwedersky, A. Nedgabat, and R. van Wegberg, “Catching phishers by their bait: Investigating the dutch phishing landscape through phishing kit detection,” in 30th USENIX Security Symposium (USENIX Security 21), 2021, pp. 3757–3774.
- X. Han, N. Kheir, and D. Balzarotti, “Phisheye: Live monitoring of sandboxed phishing kits,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 1402–1413.
- L. Zhong and Z. Wang, “A study on robustness and reliability of large language model code generation,” arXiv preprint arXiv:2308.10335, 2023.
- J. Liu, C. S. Xia, Y. Wang, and L. Zhang, “Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation,” arXiv preprint arXiv:2305.01210, 2023.
- APWG, “eCrimeX,” https://apwg.org/ecx/.
- M. Das, S. K. Pandey, and A. Mukherjee, “Evaluating chatgpt’s performance for multilingual and emoji-based hate speech detection,” arXiv preprint arXiv:2305.13276, 2023.
- K. M. Caramancion, “Harnessing the power of chatgpt to decimate mis/disinformation: Using chatgpt for fake news detection,” in 2023 IEEE World AI IoT Congress (AIIoT). IEEE, 2023, pp. 0042–0046.
- G. Deiana, M. Dettori, A. Arghittu, A. Azara, G. Gabutti, and P. Castiglia, “Artificial intelligence and public health: Evaluating chatgpt responses to vaccination myths and misconceptions,” Vaccines, vol. 11, no. 7, p. 1217, 2023.
- Anthropic, “Claude-intro,” 2023. [Online]. Available: https://www.anthropic.com/index/introducing-claude
- H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar et al., “Llama: Open and efficient foundation language models,” arXiv preprint arXiv:2302.13971, 2023. [Online]. Available: https://arxiv.org/abs/2302.13971
- Google, “Bard-google-ai,” 2023. [Online]. Available: https://blog.google/technology/ai/bard-google-ai-search-updates/
- L. Yunxiang, L. Zihan, Z. Kai, D. Ruilong, and Z. You, “Chatdoctor: A medical chat model fine-tuned on llama model using medical domain knowledge,” arXiv preprint arXiv:2303.14070, 2023. [Online]. Available: https://arxiv.org/abs/2303.14070
- C. Wu, X. Zhang, Y. Zhang, Y. Wang, and W. Xie, “Pmc-llama: Further finetuning llama on medical papers,” arXiv preprint arXiv:2304.14454, 2023. [Online]. Available: https://arxiv.org/abs/2304.14454
- H. Li, D. Guo, W. Fan, M. Xu, and Y. Song, “Multi-step jailbreaking privacy attacks on chatgpt,” arXiv preprint arXiv:2304.05197, 2023. [Online]. Available: https://arxiv.org/abs/2304.05197
- X. Shen, Z. Chen, M. Backes, Y. Shen, and Y. Zhang, “‘Do anything now’: Characterizing and evaluating in-the-wild jailbreak prompts on large language models,” arXiv preprint arXiv:2308.03825, 2023.
- Y. Liu, G. Deng, Y. Li, K. Wang, T. Zhang, Y. Liu, H. Wang, Y. Zheng, and Y. Liu, “Prompt injection attack against llm-integrated applications,” arXiv preprint arXiv:2306.05499, 2023.
- K. Greshake, S. Abdelnabi, S. Mishra, C. Endres, T. Holz, and M. Fritz, “Not what you’ve signed up for: Compromising real-world llm-integrated applications with indirect prompt injection,” 2023.
- D. Kang, X. Li, I. Stoica, C. Guestrin, M. Zaharia, and T. Hashimoto, “Exploiting programmatic behavior of llms: Dual-use through standard security attacks,” arXiv preprint arXiv:2302.05733, 2023. [Online]. Available: https://arxiv.org/abs/2302.05733
- M. Gupta, C. Akiri, K. Aryal, E. Parker, and L. Praharaj, “From chatgpt to threatgpt: Impact of generative ai in cybersecurity and privacy,” IEEE Access, 2023.
- E. Derner and K. Batistič, “Beyond the safeguards: Exploring the security risks of chatgpt,” arXiv preprint arXiv:2305.08005, 2023.
- L. De Angelis, F. Baglivo, G. Arzilli, G. P. Privitera, P. Ferragina, A. E. Tozzi, and C. Rizzo, “Chatgpt and the rise of large language models: the new ai-driven infodemic threat in public health,” Frontiers in Public Health, vol. 11, p. 1166120, 2023.
- A. Cidon, L. Gavish, I. Bleier, N. Korshun, M. Schweighauser, and A. Tsitkin, “High precision detection of business email compromise,” in 28th USENIX Security Symposium (USENIX Security 19). Santa Clara, CA: USENIX Association, Aug. 2019, pp. 1291–1307. [Online]. Available: https://www.usenix.org/conference/usenixsecurity19/presentation/cidon
- G. Ho, A. Cidon, L. Gavish, M. Schweighauser, V. Paxson, S. Savage, G. M. Voelker, and D. Wagner, “Detecting and characterizing lateral phishing at scale,” in 28th USENIX Security Symposium (USENIX Security 19). Santa Clara, CA: USENIX Association, Aug. 2019, pp. 1273–1290. [Online]. Available: https://www.usenix.org/conference/usenixsecurity19/presentation/ho
- J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018.
- D. O. Otieno, A. S. Namin, and K. S. Jones, “The application of the bert transformer model for phishing email classification,” in 2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC). IEEE, 2023, pp. 1303–1310.
- B. Karki, F. Abri, A. S. Namin, and K. S. Jones, “Using transformers for identification of persuasion principles in phishing emails,” in 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022, pp. 2841–2848.
- N. Rifat, M. Ahsan, M. Chowdhury, and R. Gomes, “Bert against social engineering attack: Phishing text detection,” in 2022 IEEE International Conference on Electro Information Technology (eIT). IEEE, 2022, pp. 1–6.
- C. Oswald, S. E. Simon, and A. Bhattacharya, “Spotspam: Intention analysis–driven sms spam detection using bert embeddings,” ACM Transactions on the Web (TWEB), vol. 16, no. 3, pp. 1–27, 2022.
- V. Sanh, L. Debut, J. Chaumond, and T. Wolf, “Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter,” arXiv preprint arXiv:1910.01108, 2019.
- Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, “Roberta: A robustly optimized bert pretraining approach,” arXiv preprint arXiv:1907.11692, 2019.
- D. He, X. Lv, S. Zhu, S. Chan, and K.-K. R. Choo, “A method for detecting phishing websites based on tiny-bert stacking,” IEEE Internet of Things Journal, 2023.
- Y. Wang, W. Zhu, H. Xu, Z. Qin, K. Ren, and W. Ma, “A large-scale pretrained deep model for phishing url detection,” in ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023, pp. 1–5.
- OpenAI, “Openai api,” 2023. [Online]. Available: https://openai.com/blog/introducing-chatgpt-and-whisper-apis
- B. Klimt and Y. Yang, “The enron corpus: A new dataset for email classification research,” in European conference on machine learning. Springer, 2004, pp. 217–226.
- R. Alabdan, “Phishing attacks survey: Types, vectors, and technical approaches,” Future internet, vol. 12, no. 10, p. 168, 2020.
- G. Varshney, M. Misra, and P. K. Atrey, “A survey and classification of web phishing detection schemes,” Security and Communication Networks, vol. 9, no. 18, pp. 6266–6284, 2016.
- L. Kang and J. Xiang, “Captcha phishing: A practical attack on human interaction proofing,” in Information Security and Cryptology: 5th International Conference, Inscrypt 2009, Beijing, China, December 12-15, 2009, Revised Selected Papers. Springer, 2010, pp. 411–425.
- Palo Alto Networks Unit 42, “Captcha-protected phishing: What you need to know,” https://unit42.paloaltonetworks.com/captcha-protected-phishing/, June 2021, [Accessed: March 9, 2023].
- Trustwave SpiderLabs Blog, “Dissecting a phishing campaign with a captcha-based url,” Trustwave, March 2021. [Online]. Available: https://bit.ly/3mDvH6q
- A. Odeh, I. Keshta, and E. Abdelfattah, “Machine learning techniques for detection of website phishing: A review for promises and challenges,” in 2021 IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC). IEEE, 2021, pp. 0813–0818.
- Google Developers, “recaptcha v3: Add the recaptcha script to your html or php file,” https://developers.google.com/recaptcha/docs/display, September 2021, [Online; accessed 9-March-2023].
- M. Morgan, “Qr code phishing scams target users and enterprise organizations,” Security Magazine, October 2021, [Online; accessed 9-March-2023]. [Online]. Available: https://www.securitymagazine.com/articles/97949-qr-code-phishing-scams-target-users-and-enterprise-organizations
- M. Kan, “Fbi: Hackers are compromising legit qr codes to send you to phishing sites,” PCMag, May 2022, [Online; accessed 9-March-2023]. [Online]. Available: https://www.pcmag.com/news/fbi-hackers-are-compromising-legit-qr-codes-to-send-you-to-phishing-sites
- T. Vidas, E. Owusu, S. Wang, C. Zeng, L. F. Cranor, and N. Christin, “Qrishing: The susceptibility of smartphone users to qr code phishing attacks,” in Financial Cryptography and Data Security: FC 2013 Workshops, USEC and WAHC 2013, Okinawa, Japan, April 1, 2013, Revised Selected Papers 17. Springer, 2013, pp. 52–69.
- QRCode Monkey, “QR Server,” https://www.qrserver.com/, Accessed on March 8, 2023.
- S. Team, “iframe injection attacks and mitigation,” SecNHack, February 2022, [Online; accessed 9-March-2023]. [Online]. Available: https://secnhack.in/iframe-injection-attacks-and-mitigation/
- Auth0. (2021, June) Preventing clickjacking attacks. [Online]. Available: https://auth0.com/blog/preventing-clickjacking-attacks/
- PortSwigger, “Same-origin policy,” https://portswigger.net/web-security/cors/same-origin-policy, 2023, [Online; accessed 9-March-2023].
- B. Liang, M. Su, W. You, W. Shi, and G. Yang, “Cracking classifiers for evasion: A case study on the google’s phishing pages filter,” in Proceedings of the 25th International Conference on World Wide Web, 2016, pp. 345–356.
- mrd0x, “Browser in the Browser: Phishing Attack,” https://mrd0x.com/browser-in-the-browser-phishing-attack/, January 2022, accessed on April 28, 2023.
- Cofense, “Global polymorphic phishing attack 2022,” https://bit.ly/3ZVtu4t, March 2022, [Accessed: March 9, 2023].
- I.-F. Lam, W.-C. Xiao, S.-C. Wang, and K.-T. Chen, “Counteracting phishing page polymorphism: An image layout analysis approach,” in Advances in Information Security and Assurance: Third International Conference and Workshops, ISA 2009, Seoul, Korea, June 25-27, 2009. Proceedings 3. Springer, 2009, pp. 270–279.
- Cybersecurity Ventures, “Beware of lookalike domains in punycode phishing attacks,” 2019. [Online]. Available: https://cybersecurityventures.com/beware-of-lookalike-domains-in-punycode-phishing-attacks/
- B. Fouss, D. M. Ross, A. B. Wollaber, and S. R. Gomez, “Punyvis: A visual analytics approach for identifying homograph phishing attacks,” in 2019 IEEE Symposium on Visualization for Cyber Security (VizSec). IEEE, 2019, pp. 1–10.
- Adobe, “Responsive web design,” https://xd.adobe.com/ideas/principles/web-design/responsive-web-design-2/, July 2021, [Accessed on 9 March 2023].
- Bootstrap, “Bootstrap,” https://getbootstrap.com/, 2023, [Accessed on 9 March 2023].
- Foundation, “Foundation,” https://get.foundation/, 2023, [Accessed on 9 March 2023].
- S. Afroz and R. Greenstadt, “Phishzoo: Detecting phishing websites by looking at them,” in 2011 IEEE fifth international conference on semantic computing. IEEE, 2011, pp. 368–375.
- B. E. Gavett, R. Zhao, S. E. John, C. A. Bussell, J. R. Roberts, and C. Yue, “Phishing suspiciousness in older and younger adults: The role of executive functioning,” Plos one, vol. 12, no. 2, p. e0171620, 2017.
- D. Lacey, P. Salmon, and P. Glancy, “Taking the bait: a systems analysis of phishing attacks,” Procedia Manufacturing, vol. 3, pp. 1109–1116, 2015.
- J. Mao, W. Tian, P. Li, T. Wei, and Z. Liang, “Phishing-alarm: Robust and efficient phishing detection via page component similarity,” IEEE Access, vol. 5, pp. 17020–17030, 2017.
- “Hostinger,” https://www.hostinger.com/.
- D. Jampen, G. Gür, T. Sutter, and B. Tellenbach, “Don’t click: towards an effective anti-phishing training. a comparative literature review,” Human-centric Computing and Information Sciences, vol. 10, no. 1, pp. 1–41, 2020.
- A. Oest, Y. Safaei, A. Doupé, G.-J. Ahn, B. Wardman, and K. Tyers, “Phishfarm: A scalable framework for measuring the effectiveness of evasion techniques against browser phishing blacklists,” in 2019 IEEE Symposium on Security and Privacy (SP). IEEE, 2019, pp. 1344–1361.
- “Google Safebrowsing,” https://safebrowsing.google.com/, 2020.
- “VirusTotal,” https://www.virustotal.com/gui/home/, 2020.
- A. K. Jain and B. Gupta, “A survey of phishing attack techniques, defence mechanisms and open research challenges,” Enterprise Information Systems, vol. 16, no. 4, pp. 527–565, 2022.
- K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, “Bleu: A method for automatic evaluation of machine translation,” in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 2002, pp. 311–318.
- C.-Y. Lin, “ROUGE: A package for automatic evaluation of summaries,” in Text Summarization Branches Out, 2004, pp. 74–81.
- P. Dutta, “Perplexity of language models,” Medium, 2021. [Online]. Available: https://medium.com/@priyankads/perplexity-of-language-models-41160427ed72
- F. Rosner, A. Hinneburg, M. Röder, M. Nettling, and A. Both, “Evaluating topic coherence measures,” arXiv preprint arXiv:1403.6397, 2014.
- OpenAI, “Openai gpt-3.5 models,” 2022. [Online]. Available: https://platform.openai.com/docs/models/gpt-3-5
- ——, “Gpt-4 technical report,” 2023.
- OpenPhish, “Phishing activity tracked by openphish,” 2023. [Online]. Available: https://openphish.com/phishing_activity.html
- W. Dai, G.-R. Xue, Q. Yang, and Y. Yu, “Transferring naive bayes classifiers for text classification,” in AAAI, vol. 7, 2007, pp. 540–545.
- Z. Liu, X. Lv, K. Liu, and S. Shi, “Study on svm compared with the other text classification methods,” in 2010 Second international workshop on education technology and computer science, vol. 1. IEEE, 2010, pp. 219–222.
- X. Sun, L. Tu, J. Zhang, J. Cai, B. Li, and Y. Wang, “Assbert: Active and semi-supervised bert for smart contract vulnerability detection,” Journal of Information Security and Applications, vol. 73, p. 103423, 2023.
- M. B. Messaoud, A. Miladi, I. Jenhani, M. W. Mkaouer, and L. Ghadhab, “Duplicate bug report detection using an attention-based neural language model,” IEEE Transactions on Reliability, 2022.
- K. Clark, M.-T. Luong, Q. V. Le, and C. D. Manning, “Electra: Pre-training text encoders as discriminators rather than generators,” arXiv preprint arXiv:2003.10555, 2020.
- P. He, X. Liu, J. Gao, and W. Chen, “Deberta: Decoding-enhanced bert with disentangled attention,” arXiv preprint arXiv:2006.03654, 2020.
- Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. R. Salakhutdinov, and Q. V. Le, “Xlnet: Generalized autoregressive pretraining for language understanding,” Advances in neural information processing systems, vol. 32, 2019.
- “Bugcrowd,” https://bugcrowd.com/openai.
- Y. Lin, R. Liu, D. M. Divakaran, J. Y. Ng, Q. Z. Chan, Y. Lu, Y. Si, F. Zhang, and J. S. Dong, “Phishpedia: A hybrid deep learning based approach to visually identify phishing webpages.” in USENIX Security Symposium, 2021, pp. 3793–3810.
- R. Liu, Y. Lin, X. Yang, S. H. Ng, D. M. Divakaran, and J. S. Dong, “Inferring phishing intention via webpage appearance and dynamics: A deep vision based approach,” in 31st USENIX Security Symposium (USENIX Security 22), 2022.
Authors: Sayak Saha Roy, Poojitha Thota, Krishna Vamsi Naragam, Shirin Nilizadeh