Detecting AI-Generated Text in Educational Content: Leveraging Machine Learning and Explainable AI for Academic Integrity (2501.03203v1)

Published 6 Jan 2025 in cs.CL, cs.AI, and cs.CY

Abstract: This study seeks to enhance academic integrity by providing tools to detect AI-generated content in student work using advanced technologies. The findings promote transparency and accountability, helping educators maintain ethical standards and supporting the responsible integration of AI in education. A key contribution of this work is the generation of the CyberHumanAI dataset, which has 1,000 observations, 500 of which are written by humans and the other 500 produced by ChatGPT. We evaluate various ML and deep learning (DL) algorithms on the CyberHumanAI dataset, comparing human-written and AI-generated content from LLMs (i.e., ChatGPT). Results demonstrate that traditional ML algorithms, specifically XGBoost and Random Forest, achieve high performance (83% and 81% accuracies, respectively). Results also show that classifying shorter content seems to be more challenging than classifying longer content. Further, using Explainable Artificial Intelligence (XAI), we identify discriminative features influencing the ML model's predictions, where human-written content tends to use practical language (e.g., use and allow), while AI-generated text is characterized by more abstract and formal terms (e.g., realm and employ). Finally, a comparative analysis with GPTZero shows that our narrowly focused, simple, and fine-tuned model can outperform generalized systems like GPTZero. The proposed model achieved approximately 77.5% accuracy compared to GPTZero's 48.5% accuracy when tasked with classifying Pure AI, Pure Human, and Mixed content. GPTZero showed a tendency to classify challenging and short-content cases as either mixed or unrecognized, while our proposed model showed more balanced performance across the three classes.

Detection of AI-Generated Text in Educational Contexts through Machine Learning and Explainable AI

The paper "Detecting AI-Generated Text in Educational Content: Leveraging Machine Learning and Explainable AI for Academic Integrity" addresses a pertinent issue of distinguishing AI-generated text, specifically from LLMs like ChatGPT, from human-authored content and its implications on educational integrity. This paper targets the development of effective detection methodologies to maintain ethical standards in academic environments, a significant concern given the rising prevalence of AI-generated content in student submissions.

Methodology Overview

The authors introduce a novel dataset, CyberHumanAI, comprising 1,000 cybersecurity paragraphs evenly split between human-written content sourced from Wikipedia and ChatGPT-generated text. The data underwent standard preprocessing: stop-word removal, lemmatization, and TF-IDF vectorization to convert the text into numerical features suitable for model training. The dataset captures the linguistic tendencies associated with both classes, providing a basis for training detection algorithms.
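A minimal sketch of this kind of preprocessing and vectorization pipeline is shown below. The file name, column names, and TF-IDF settings are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative preprocessing/TF-IDF pipeline (file name, column names,
# and feature cap are assumptions; the paper does not publish its code).
import pandas as pd
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer

nltk.download("stopwords")
nltk.download("wordnet")
nltk.download("omw-1.4")

stop_words = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def preprocess(text: str) -> str:
    """Lowercase, drop stop words and non-alphabetic tokens, then lemmatize."""
    tokens = [t for t in text.lower().split() if t.isalpha() and t not in stop_words]
    return " ".join(lemmatizer.lemmatize(t) for t in tokens)

# Hypothetical layout: one paragraph per row, label 0 = human, 1 = ChatGPT.
df = pd.read_csv("cyber_human_ai.csv")           # assumed file name
df["clean"] = df["text"].apply(preprocess)

vectorizer = TfidfVectorizer(max_features=5000)  # assumed feature cap
X = vectorizer.fit_transform(df["clean"])
y = df["label"]
```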

The paper evaluates several classification approaches, including traditional machine learning models (Random Forest, Support Vector Machines, J48, and XGBoost) and deep learning architectures (CNN and DNN). XGBoost and Random Forest perform best, achieving classification accuracies of 83% and 81%, respectively, and the paper also observes that shorter texts are more difficult to classify than longer ones. The deep learning models, although well suited to raw pattern recognition, did not demonstrate equivalent efficacy, presumably due to the dataset's modest size and nature.
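Continuing the running example above, the model comparison could be reproduced along the following lines; the train/test split ratio and hyperparameters are assumptions for illustration, and a scikit-learn decision tree stands in for J48 (C4.5).

```python
# Illustrative train/evaluate loop for the traditional ML baselines
# (split ratio and hyperparameters are assumed, not the paper's settings).
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier   # stand-in for J48 (C4.5)
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

models = {
    "XGBoost": XGBClassifier(eval_metric="logloss"),
    "Random Forest": RandomForestClassifier(n_estimators=300, random_state=42),
    "SVM": LinearSVC(),
    "Decision Tree (J48 analogue)": DecisionTreeClassifier(),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {acc:.3f}")
```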

Explainable AI and Findings

Crucially, the paper incorporates Explainable Artificial Intelligence (XAI), specifically Local Interpretable Model-agnostic Explanations (LIME), to elucidate the models' decision pathways. This analysis identifies discriminative linguistic features: human compositions frequently use practical language ("allow," "use"), whereas AI-generated text tends toward more abstract and formal terms ("realm," "employ"). Such insights substantiate the model's predictive rationale, fostering transparency and trust.
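A hedged sketch of how LIME can be applied to a text classifier of this kind follows, continuing the running example; the class names and pipeline composition are assumptions rather than the authors' exact setup.

```python
# Illustrative LIME explanation for a single prediction.
from lime.lime_text import LimeTextExplainer
from sklearn.pipeline import make_pipeline

# LIME needs raw text in and class probabilities out, so wrap the fitted
# vectorizer and a probabilistic classifier in one pipeline.
pipeline = make_pipeline(vectorizer, models["Random Forest"])

explainer = LimeTextExplainer(class_names=["human", "AI"])
sample = df["clean"].iloc[0]
explanation = explainer.explain_instance(
    sample, pipeline.predict_proba, num_features=10
)
print(explanation.as_list())   # top tokens pushing the prediction either way
```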

Comparative Analysis with GPTZero

The paper then benchmarks the developed XGBoost model against GPTZero, a generalized AI-detection system. The proposed model achieves roughly 77.5% accuracy in distinguishing between pure AI, mixed, and pure human text, surpassing GPTZero's 48.5%. This suggests that a narrowly specialized model can outperform a more general-purpose system on a specific task, particularly in educational contexts. GPTZero's results reflect a conservative bias toward mixed or unrecognized classifications, revealing limitations in correctly identifying clear-cut AI or human text in shorter passages.
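One way to make the "balanced versus conservative" contrast concrete is to inspect per-class metrics. A minimal sketch follows, with placeholder label lists standing in for the true labels and each detector's predictions (none of these values come from the paper).

```python
# Hypothetical per-class comparison; replace the placeholder lists with
# real labels and predictions from each detector.
from sklearn.metrics import classification_report, confusion_matrix

labels = ["pure_ai", "pure_human", "mixed"]
y_true         = ["pure_ai", "pure_human", "mixed", "pure_ai"]   # placeholder
y_pred_ours    = ["pure_ai", "pure_human", "mixed", "pure_ai"]   # placeholder
y_pred_gptzero = ["mixed",   "pure_human", "mixed", "mixed"]     # placeholder

for name, y_pred in [("Proposed model", y_pred_ours), ("GPTZero", y_pred_gptzero)]:
    print(name)
    print(confusion_matrix(y_true, y_pred, labels=labels))
    print(classification_report(y_true, y_pred, labels=labels, zero_division=0))
```

A detector with a conservative bias shows up here as a confusion matrix whose mass collapses into the "mixed" column, whereas a balanced detector spreads correct predictions along the diagonal.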

Pedagogical Implications and Future Directions

The implications of these findings are multifaceted. By enabling accurate detection of AI-generated text, the research helps preserve academic integrity by giving educators a means of identifying potential misuse of generative tools in student work. The interpretability provided by XAI also lets educators develop more informed strategies for evaluating the authenticity of digital submissions.

Additionally, the paper invites further work on fine-tuning and extending detection models across larger and more diverse datasets. Future research might focus on refining algorithms to handle subtler forms of AI-generated content interwoven with human writing and on adapting to evolving generative AI capabilities. Moreover, continued assessment of XAI methodologies could yield deeper insights into model interpretability and applicability across academic and professional domains.

In conclusion, the paper advances the utility of machine learning and explainable AI in enhancing educational practices and maintaining integrity in the face of rapidly advancing AI technologies. It demonstrates the promise of targeted, dataset-specific AI systems for practical applications, reinforcing the importance of transparency and applicability in AI research for educational contexts.

Authors (4)
  1. Ayat A. Najjar (1 paper)
  2. Huthaifa I. Ashqar (49 papers)
  3. Omar A. Darwish (1 paper)
  4. Eman Hammad (2 papers)