Evaluating LLM-Generated Q&A Test: a Student-Centered Study (2505.06591v1)

Published 10 May 2025 in cs.CL and cs.HC

Abstract: This research prepares an automatic pipeline for generating reliable question-answer (Q&A) tests using AI chatbots. We automatically generated a GPT-4o-mini-based Q&A test for a Natural Language Processing course and evaluated its psychometric and perceived-quality metrics with students and experts. A mixed-format IRT analysis showed that the generated items exhibit strong discrimination and appropriate difficulty, while student and expert star ratings reflect high overall quality. A uniform DIF check identified two items for review. These findings demonstrate that LLM-generated assessments can match human-authored tests in psychometric performance and user satisfaction, illustrating a scalable approach to AI-assisted assessment development.

Summary

Evaluating LLM-Generated Q&A Tests: Insights and Implications

The paper "Evaluating LLM-Generated Q&A Test: A Student-Centered Study" by Wróblewska et al. explores the efficacy of using AI, particularly LLMs like GPT-4o, in automating the generation of question-answer tests. With an emphasis on psychometrics and the perceptions of students and experts, the paper provides a comprehensive investigation into the potential for AI to supplement traditional educational assessments.

Research Objectives and Methodology

The primary goal of this paper is to streamline the creation of high-quality assessments with minimal human intervention while maintaining effective student evaluation standards. An LLM is used to automatically generate question-answer items, which are then assembled into a test and administered through the evaluation pipeline (a generation sketch follows the list below). Two hypotheses are tested:

  1. LLM-generated items are comparable in psychometric quality to items crafted by human experts.
  2. The perceived quality of these AI-generated questions is on par with traditional test items, establishing their viability for academic assessments.
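
The summary does not describe the authors' prompts or orchestration, so the following is only a minimal sketch of how such a generation step might look, assuming the OpenAI Python SDK and GPT-4o-mini; the prompt wording, function name, and output handling are illustrative, not the paper's actual pipeline.

```python
# Illustrative sketch (not the authors' pipeline): generating one
# multiple-choice Q&A item for an NLP course topic with GPT-4o-mini.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_item(topic: str) -> str:
    """Ask the model for one exam question with four options and an answer key."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You write clear, unambiguous exam questions."},
            {"role": "user",
             "content": (f"Write one multiple-choice question about {topic} "
                         "with options A-D and mark the correct answer.")},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_item("word embeddings"))
```

In practice, each generated item would still pass through the study's downstream checks (administration to students, IRT calibration, and expert review) before being treated as a usable test question.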

The research applies a mixed-format Item Response Theory (IRT) analysis to estimate the generated items' discrimination and difficulty parameters. A cohort of students from an NLP course takes the generated test, and both students and experts rate the quality of the questions.
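For the dichotomous items, the conventional two-parameter logistic (2PL) IRT model relates a student's latent ability $\theta$ to the probability of answering item $i$ correctly; the paper's mixed-format analysis also covers non-binary items, which use graded extensions of this form, so the equation below is only the standard dichotomous case:

$$
P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}},
$$

where $a_i$ is the item's discrimination (how sharply the item separates lower- from higher-ability students) and $b_i$ its difficulty (the ability level at which a correct response has probability 0.5).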

Key Findings

The results indicate that the LLM-generated assessment items possess strong psychometric characteristics: the mean discrimination is 0.75 and the mean difficulty is -4.31, values in line with those typically reported for hand-crafted items. Moreover, perceived-quality ratings from both students (averaging 3.8 out of 5) and experts (averaging 4.5 out of 5) suggest that the AI-generated questions are well received.
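To put these averages in context (assuming the 2PL form above; the reported values are averages over a mixed-format test, so this is only indicative): a student of average ability ($\theta = 0$) facing an item with $a = 0.75$ and $b = -4.31$ answers correctly with probability $1 / (1 + e^{-0.75\,(0 - (-4.31))}) = 1 / (1 + e^{-3.23}) \approx 0.96$. In other words, the items are easy for the typical student, while the discrimination of 0.75 still separates lower- from higher-ability examinees.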

Although some items showed no response variability and were excluded from the analysis, the majority of questions effectively differentiate among students of varying ability. The analysis also flags differential item functioning (DIF): items 1 and 21 in particular exhibit significant uniform DIF, pointing to unaccounted-for biases that could affect fair assessment across student subgroups and warrant review.
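The summary does not specify how the uniform DIF check was performed; a common approach is the logistic-regression DIF test (Swaminathan and Rogers), sketched below under that assumption. The function name, variable names, and data handling are illustrative.

```python
# Illustrative uniform-DIF check via logistic regression (Swaminathan &
# Rogers style). Assumes: y = 0/1 responses to one item, score = an
# ability proxy (e.g., rest score), group = 0/1 subgroup indicator.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def uniform_dif_test(y, score, group):
    """Likelihood-ratio test: does adding the group term improve fit?"""
    # Baseline model: item response explained by the ability proxy only.
    X0 = sm.add_constant(np.column_stack([score]))
    m0 = sm.Logit(y, X0).fit(disp=0)
    # Augmented model: ability proxy plus group membership (uniform DIF).
    X1 = sm.add_constant(np.column_stack([score, group]))
    m1 = sm.Logit(y, X1).fit(disp=0)
    lr = 2 * (m1.llf - m0.llf)   # likelihood-ratio statistic
    p = chi2.sf(lr, df=1)        # one extra parameter in the augmented model
    return lr, p
```

Items whose p-value falls below a chosen threshold (ideally after a multiple-comparison correction) would be flagged for review, as items 1 and 21 were here.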

Implications and Future Directions

The paper provides valuable insights into how AI tools can enhance educational testing by automating the creation of assessment materials. Such automation could significantly reduce educators' workloads, channeling their efforts towards pedagogical innovation rather than routine test preparation.

However, the paper also highlights areas for improvement, notably the need for larger samples to strengthen the generalizability of the findings and for revision of the items flagged for DIF to ensure unbiased evaluation. The authors suggest future research directions, including expanding the study to more diverse item formats, investigating active-learning techniques that involve students in revising AI-generated content, and introducing systematic refinement protocols to mitigate AI-induced ambiguities or errors.

Ethical Considerations

The authors address ethical concerns, focusing on transparency, data privacy, and potential algorithmic bias. Collected data are anonymized, and ethical guidelines aligned with UNESCO recommendations are followed throughout the study.

Conclusions

This paper demonstrates the potential for integrating AI into educational environments, not merely as an auxiliary tool but as a primary driver of assessment innovation. By achieving comparable psychometric robustness and quality ratings to human-created items, LLM systems offer a scalable solution to the challenges of contemporary educational demands. As AI continues to evolve, exploring its role in diverse educational contexts remains a promising endeavor.