
Automatic Short Math Answer Grading via In-context Meta-learning (2205.15219v3)

Published 30 May 2022 in cs.CL and cs.LG

Abstract: Automatic short answer grading is an important research direction in the exploration of how to use AI-based tools to improve education. Current state-of-the-art approaches use neural LLMs to create vectorized representations of student responses, followed by classifiers to predict the score. However, these approaches have several key limitations, including i) they use pre-trained LLMs that are not well-adapted to educational subject domains and/or student-generated text, and ii) they almost always train one model per question, ignoring linkage across questions and resulting in a significant model storage problem due to the size of advanced LLMs. In this paper, we study the problem of automatically grading students' short answers to math questions and propose a novel framework for this task. First, we use MathBERT, a variant of the popular LLM BERT adapted to mathematical content, as our base model, and fine-tune it for the downstream task of student response grading. Second, we use an in-context learning approach that provides scoring examples as input to the LLM, supplying additional context information and promoting generalization to previously unseen questions. We evaluate our framework on a real-world dataset of student responses to open-ended math questions and show that it (often significantly) outperforms existing approaches, especially on new questions not seen during training.
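The in-context approach described in the abstract hinges on how the model's input is assembled: the question, a few already-scored example responses, and the ungraded target response are packed into a single sequence before being passed to the fine-tuned MathBERT classifier. The sketch below illustrates one plausible way to build such an input; the function name, separator token, and "score:" formatting are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch: assembling an in-context grading input from scored
# examples, roughly as described in the abstract. The exact prompt
# format used by the authors may differ.

def format_grading_input(question, scored_examples, target_response,
                         sep=" [SEP] "):
    """Concatenate the question, k scored example responses, and the
    ungraded target response into one text sequence for the model.

    scored_examples: list of (response_text, score) pairs.
    """
    parts = [question]
    for example_response, score in scored_examples:
        # Each in-context example carries its human-assigned score.
        parts.append(f"{example_response} score: {score}")
    # The target response comes last, with no score attached.
    parts.append(target_response)
    return sep.join(parts)


text = format_grading_input(
    "Simplify 2x + 3x.",
    [("5x", 4), ("6x", 1)],
    "x(2+3) = 5x",
)
print(text)
```

In a full pipeline, this string would then be tokenized with the base model's tokenizer and fed to a sequence-classification head predicting the score class; because the scoring examples travel with the input rather than being baked into per-question weights, one model can serve many questions, including unseen ones.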

Authors (4)
  1. Mengxue Zhang
  2. Sami Baral
  3. Neil Heffernan
  4. Andrew Lan
Citations (21)