Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
38 tokens/sec
GPT-4o
59 tokens/sec
Gemini 2.5 Pro Pro
41 tokens/sec
o3 Pro
7 tokens/sec
GPT-4.1 Pro
50 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns (2402.09728v1)

Published 15 Feb 2024 in cs.CR and cs.AI
AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns

Abstract: SMS phishing, also known as "smishing", is a growing threat that tricks users into disclosing private information or clicking into URLs with malicious content through fraudulent mobile text messages. In recent past, we have also observed a rapid advancement of conversational generative AI chatbot services (e.g., OpenAI's ChatGPT, Google's BARD), which are powered by pre-trained LLMs. These AI chatbots certainly have a lot of utilities but it is not systematically understood how they can play a role in creating threats and attacks. In this paper, we propose AbuseGPT method to show how the existing generative AI-based chatbot services can be exploited by attackers in real world to create smishing texts and eventually lead to craftier smishing campaigns. To the best of our knowledge, there is no pre-existing work that evidently shows the impacts of these generative text-based models on creating SMS phishing. Thus, we believe this study is the first of its kind to shed light on this emerging cybersecurity threat. We have found strong empirical evidences to show that attackers can exploit ethical standards in the existing generative AI-based chatbot services by crafting prompt injection attacks to create newer smishing campaigns. We also discuss some future research directions and guidelines to protect the abuse of generative AI-based services and safeguard users from smishing attacks.

AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns

The paper entitled "AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns" addresses the emerging challenge of leveraging conversational generative AI services to facilitate SMS phishing or smishing attacks. The paper primarily investigates the potential misuse of LLM based chatbots like OpenAI's ChatGPT to generate malicious text messages capable of deceiving users into disclosing sensitive personal information or visiting harmful URLs. This research is pioneering in its systematic approach to understanding the possible threats posed by generative AI in crafting and executing smishing campaigns, an area that has been notably underexplored.

The investigation methodically uncovers how attackers can exploit generative AI chatbots by employing prompt injection techniques to bypass implemented ethical standards, enabling the generation of plausible smishing content. The researchers demonstrate the effectiveness of various publicly available jailbreak prompts, such as the "AIM" prompt, to reduce chatbot compliance with ethical guidelines. This work underscores the role of AI chatbots in crafting realistic smishing messages that could impersonate legitimate institutions, manipulate users into sharing confidential information, and escalate the sophistication of phishing attempts.

In their methodology, the authors analyzed the steps that attackers could take to exploit current popular AI chatbot services. These steps include designing effective prompts to induce the chatbots to produce smishing text and identifying tools for executing comprehensive smishing strategies. A significant finding highlighted in the research is the capability of generative AI to suggest themes and methodologies for smishing that are both common and contextually innovative, further enhancing the potential effectiveness of such campaigns.

From a practical standpoint, the implications of this paper are substantial. The findings point to the urgent need for stronger ethical frameworks and security measures in designing and deploying AI chatbot systems. This involves reinforcing the ethical standards and defenses against prompt injection attacks to deter potential misuse by bad actors. Theoretical implications include a call for future research into developing more advanced AI-driven detection mechanisms that can anticipate and counteract such malicious activities proactively.

Although the paper successfully unveils the vulnerabilities of chatbots against exploitable prompts, it also acknowledges certain limitations. The effectiveness of these attacks may vary with time as chatbot developers refine their systems' defenses. Additionally, the researchers did not engage in registering fake domains or conducting user studies to empirically measure the smishing message success rate, leaving these as potential avenues for future research.

In conclusion, the paper elucidates a concerning application of generative AI systems and argues for both technological enhancement and vigilant monitoring by AI developers, mobile operators, and the broader cybersecurity community. Addressing these vulnerabilities is crucial for preventing the misuse of generative AI in smishing campaigns and safeguarding user data integrity in an increasingly interconnected and AI-driven world. As generative AI continues to evolve, so too must the strategies to mitigate its misuse, ensuring these powerful technologies are secured against exploitation.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (3)
  1. Ashfak Md Shibli (2 papers)
  2. Mir Mehedi A. Pritom (4 papers)
  3. Maanak Gupta (36 papers)
Citations (7)
X Twitter Logo Streamline Icon: https://streamlinehq.com
Youtube Logo Streamline Icon: https://streamlinehq.com