Using AI Large Language Models for Grading in Education: A Hands-On Test for Physics (2411.13685v1)

Published 20 Nov 2024 in physics.ed-ph

Abstract: Grading assessments is time-consuming and prone to human bias. Students may experience delays in receiving feedback that may not be tailored to their expectations or needs. Harnessing AI in education can be effective for grading undergraduate physics problems, enhancing the efficiency of undergraduate-level physics learning and teaching, and helping students understand concepts with the help of a constantly available tutor. This report devises a simple empirical procedure to investigate and quantify how well LLM-based AI chatbots can grade solutions to undergraduate physics problems in Classical Mechanics, Electromagnetic Theory and Quantum Mechanics, comparing human grading against AI grading. The following LLMs were tested: Gemini 1.5 Pro, GPT-4, GPT-4o and Claude 3.5 Sonnet. The results show AI grading is prone to mathematical errors and hallucinations, which render it less effective than human grading; when given a mark scheme, however, grading quality improves substantially and approaches the level of human performance, which is promising for future AI implementation. Evidence indicates that the grading ability of an LLM is correlated with its problem-solving ability. Through unsupervised clustering, it is shown that Classical Mechanics problems may be graded differently from other topics. The method developed can be applied to investigate AI grading performance in other STEM fields.
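
The abstract does not spell out the scoring pipeline, but the kind of human-vs-AI comparison it describes can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: the marks below are invented placeholder data, and the choice of mean absolute error and Pearson correlation as agreement metrics is an assumption.

```python
import numpy as np

# Illustrative data only: marks (out of 10) assigned by a human grader and by
# an LLM to the same set of solutions, grouped by topic. The study covered
# Classical Mechanics, Electromagnetic Theory and Quantum Mechanics.
grades = {
    "Classical Mechanics":    {"human": [8, 6, 9, 5], "llm": [7, 6, 10, 4]},
    "Electromagnetic Theory": {"human": [7, 8, 6, 9], "llm": [7, 7, 6, 8]},
    "Quantum Mechanics":      {"human": [5, 9, 7, 8], "llm": [6, 9, 6, 8]},
}

for topic, marks in grades.items():
    human = np.asarray(marks["human"], dtype=float)
    llm = np.asarray(marks["llm"], dtype=float)
    # Mean absolute error: the average size of the AI's deviation from the
    # human mark, in raw marks.
    mae = np.mean(np.abs(human - llm))
    # Pearson correlation: does the AI rank the solutions the same way the
    # human grader does?
    r = np.corrcoef(human, llm)[0, 1]
    print(f"{topic}: MAE = {mae:.2f} marks, Pearson r = {r:.2f}")
```

Running the same comparison twice, once with the mark scheme included in the grading prompt and once without, would quantify the improvement the abstract reports; per-topic statistics like these are also the natural input for the unsupervised clustering that separates Classical Mechanics from the other topics.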
