Generative AI Misuse Potential in Cyber Security Education: A Case Study of a UK Degree Program

Published 22 Jan 2025 in cs.CR and cs.CY (arXiv:2501.12883v3)

Abstract: Recent advances in generative AI, such as ChatGPT, Google Gemini, and other LLMs, pose significant challenges to upholding academic integrity in higher education. This paper investigates the susceptibility of a Master's-level cyber security degree program at a UK Russell Group university, accredited by a leading national body, to LLM misuse. Through the application and extension of a quantitative assessment framework, we identify a high exposure to misuse, particularly in independent project- and report-based assessments. Contributing factors, including block teaching and a predominantly international cohort, are highlighted as potential amplifiers of these vulnerabilities. To address these challenges, we discuss the adoption of LLM-resistant assessments, detection tools, and the importance of fostering an ethical learning environment. These approaches aim to uphold academic standards while preparing students for the complexities of real-world cyber security.
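The abstract refers to applying and extending a quantitative assessment framework to gauge exposure to LLM misuse. The paper does not spell out the scoring mechanics here, so the following is only a minimal sketch of one plausible weighted-scoring interpretation; the criteria, weights, and exposure values are illustrative assumptions, not the authors' published instrument.

```python
# Hypothetical sketch: credit-weighted misuse-exposure score for a programme's
# assessment mix. All names, weights, and exposure ratings are assumptions for
# illustration, not values taken from the paper.

from dataclasses import dataclass

@dataclass
class Assessment:
    name: str
    weight: float    # share of programme credit (0..1)
    exposure: float  # judged susceptibility to LLM misuse (0 = low, 1 = high)

def programme_exposure(assessments: list[Assessment]) -> float:
    """Return the credit-weighted average exposure across all assessments."""
    total_weight = sum(a.weight for a in assessments)
    if total_weight == 0:
        return 0.0
    return sum(a.weight * a.exposure for a in assessments) / total_weight

if __name__ == "__main__":
    # Illustrative mix for a project- and report-heavy Master's programme.
    mix = [
        Assessment("Invigilated exam", 0.3, 0.1),
        Assessment("Coursework report", 0.3, 0.8),
        Assessment("Independent project", 0.4, 0.9),
    ]
    print(f"Programme-level exposure: {programme_exposure(mix):.2f}")
```

Under this reading, a programme dominated by independent project- and report-based assessments scores high on exposure, which is consistent with the abstract's finding that such assessments are the main contributors to misuse potential.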
