qDKT: Question-centric Deep Knowledge Tracing (2005.12442v1)

Published 25 May 2020 in cs.LG, cs.AI, and stat.ML

Abstract: Knowledge tracing (KT) models, e.g., the deep knowledge tracing (DKT) model, track an individual learner's acquisition of skills over time by examining the learner's performance on questions related to those skills. A practical limitation in most existing KT models is that all questions nested under a particular skill are treated as equivalent observations of a learner's ability, which is an inaccurate assumption in real-world educational scenarios. To overcome this limitation, we introduce qDKT, a variant of DKT that models every learner's success probability on individual questions over time. First, qDKT incorporates graph Laplacian regularization to smooth predictions under each skill, which is particularly useful when the number of questions in the dataset is large. Second, qDKT uses an initialization scheme inspired by the fastText algorithm, which has found success in a variety of language modeling tasks. Our experiments on several real-world datasets show that qDKT achieves state-of-the-art performance on predicting learner outcomes. Because of this, qDKT can serve as a simple, yet tough-to-beat, baseline for new question-centric KT models.
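The abstract's graph Laplacian regularization can be illustrated with a small sketch. The snippet below is not the authors' implementation; it is a minimal, hypothetical example of the standard Laplacian smoothness penalty, assuming an unweighted question graph where two questions are connected if they share a skill, and an illustrative weight `lam`.

```python
# Hypothetical sketch of graph Laplacian regularization (not the authors' code).
# Questions sharing a skill are linked in a similarity graph; the penalty
# lam * p^T L p grows when same-skill predictions disagree, encouraging smoothness.
import numpy as np

def laplacian_penalty(preds, adjacency, lam=0.1):
    """preds: (num_questions,) predicted success probabilities at one time step.
    adjacency: (num_questions, num_questions) symmetric 0/1 matrix, 1 if two
    questions share a skill. Returns lam * preds^T L preds with L = D - A."""
    degree = np.diag(adjacency.sum(axis=1))   # diagonal degree matrix D
    laplacian = degree - adjacency            # graph Laplacian L
    return lam * preds @ laplacian @ preds

# Toy usage: three questions, the first two under the same skill.
A = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]], dtype=float)
p = np.array([0.9, 0.4, 0.7])
print(laplacian_penalty(p, A))  # 0.025: penalizes the 0.9 vs 0.4 disagreement
```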

Authors (5)
  1. Shashank Sonkar (21 papers)
  2. Andrew E. Waters (7 papers)
  3. Andrew S. Lan (21 papers)
  4. Phillip J. Grimaldi (1 paper)
  5. Richard G. Baraniuk (141 papers)
Citations (28)
