
A Knowledge-Component-Based Methodology for Evaluating AI Assistants (2406.05603v1)

Published 9 Jun 2024 in cs.CY and cs.AI

Abstract: We evaluate an automatic hint generator for CS1 programming assignments powered by GPT-4, an LLM. This system provides natural language guidance about how students can improve their incorrect solutions to short programming exercises. A hint can be requested each time a student fails a test case. Our evaluation addresses three Research Questions: RQ1: Do the hints help students improve their code? RQ2: How effectively do the hints capture problems in student code? RQ3: Are the issues that students resolve the same as the issues addressed in the hints? To address these research questions quantitatively, we identified a set of fine-grained knowledge components and determined which ones apply to each exercise, incorrect solution, and generated hint. Comparing data from two large CS1 offerings, we found that access to the hints helps students address problems with their code more quickly, that hints consistently capture the most pressing errors in students' code, and that hints addressing a few issues at once rather than a single bug are more likely to lead to direct student progress.
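The methodology tags each exercise, incorrect solution, and generated hint with a set of knowledge components (KCs), then compares those sets (e.g., for RQ3, whether the issues students resolve match the issues the hints address). A minimal sketch of such a comparison, assuming a set-overlap metric like Jaccard similarity (the function name, KC labels, and metric are illustrative assumptions, not the paper's actual code):

```python
# Hypothetical sketch: compare the KCs a hint addresses with the KCs
# the student subsequently resolved. The Jaccard metric and KC labels
# below are illustrative assumptions.

def kc_overlap(hint_kcs: set[str], resolved_kcs: set[str]) -> float:
    """Jaccard similarity between hint KCs and student-resolved KCs."""
    if not hint_kcs and not resolved_kcs:
        return 1.0  # both empty: treat as perfect agreement
    return len(hint_kcs & resolved_kcs) / len(hint_kcs | resolved_kcs)

hint = {"off-by-one", "loop-bounds", "return-vs-print"}
resolved = {"loop-bounds", "return-vs-print"}
print(kc_overlap(hint, resolved))  # 2 shared / 3 total ≈ 0.667
```

A higher overlap would indicate that students fix the same issues the hint flagged, which is the kind of alignment RQ3 measures at a fine-grained level.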

Authors (6)
  1. Laryn Qi (2 papers)
  2. J. D. Zamfirescu-Pereira (14 papers)
  3. Taehan Kim (8 papers)
  4. Björn Hartmann (9 papers)
  5. John DeNero (13 papers)
  6. Narges Norouzi (16 papers)
Citations (1)