
Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models (2406.00628v1)

Published 2 Jun 2024 in cs.CL, cs.CR, cs.CY, and cs.LG

Abstract: LLMs have revolutionized how we interact with machines. However, this technological advancement has been paralleled by the emergence of "Mallas," malicious services operating underground that exploit LLMs for nefarious purposes. Such services create malware, phishing attacks, and deceptive websites, escalating the cybersecurity threat landscape. This paper delves into the proliferation of Mallas by examining the use of various pre-trained LLMs and their efficiency and vulnerabilities when misused. Building on a dataset from the Common Vulnerabilities and Exposures (CVE) program, it explores fine-tuning methodologies to generate code and explanatory text related to identified vulnerabilities. This research aims to shed light on the operational strategies and exploitation techniques of Mallas, leading to the development of more secure and trustworthy AI applications. The paper concludes by emphasizing the need for further research, enhanced safeguards, and ethical guidelines to mitigate the risks associated with the malicious application of LLMs.
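The abstract's fine-tuning approach builds training data from CVE records. A minimal sketch of how such a dataset might be prepared, assuming a hypothetical record schema (the paper's actual fields and prompt format are not specified here), could look like this, framed for the defensive goal of generating explanatory text about known vulnerabilities:

```python
import json

# Hypothetical CVE-style records; the field names ("id", "description")
# are an assumption for illustration, not the paper's actual schema.
cve_records = [
    {
        "id": "CVE-2021-44228",
        "description": "Apache Log4j2 JNDI features allow remote code "
                       "execution via crafted log messages.",
    },
    {
        "id": "CVE-2014-0160",
        "description": "OpenSSL heartbeat extension allows reading of "
                       "process memory (Heartbleed).",
    },
]

def to_finetune_example(record):
    """Turn one CVE entry into an instruction-tuning pair: the prompt
    asks the model to explain the vulnerability, and the official CVE
    description serves as the target completion."""
    return {
        "prompt": f"Explain the vulnerability {record['id']} and its impact.",
        "completion": record["description"],
    }

dataset = [to_finetune_example(r) for r in cve_records]

# Serialize as JSONL, a common input format for fine-tuning pipelines.
jsonl = "\n".join(json.dumps(ex) for ex in dataset)
print(jsonl.splitlines()[0])
```

The resulting prompt/completion pairs would then be fed to whichever fine-tuning pipeline the chosen pre-trained model supports.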

Authors (4)
  1. Garrett Crumrine (1 paper)
  2. Izzat Alsmadi (17 papers)
  3. Jesus Guerrero (5 papers)
  4. Yuvaraj Munian (1 paper)
