Improving Knowledge Tracing via Pre-training Question Embeddings (2012.05031v1)

Published 9 Dec 2020 in cs.IR and cs.LG

Abstract: Knowledge tracing (KT) is the task of predicting whether students can correctly answer questions based on their historical responses. Although much research has been devoted to exploiting question information, a wealth of advanced information among questions and skills has not been well extracted, making it challenging for previous work to perform adequately. In this paper, we demonstrate that large gains on KT can be realized by pre-training embeddings for each question on abundant side information, followed by training deep KT models on the obtained embeddings. Specifically, the side information includes question difficulty and three kinds of relations contained in a bipartite graph between questions and skills. To pre-train the question embeddings, we propose to use product-based neural networks to recover the side information. As a result, adopting the pre-trained embeddings in existing deep KT models significantly outperforms state-of-the-art baselines on three common KT datasets.
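The abstract's core idea is to learn question embeddings by training a small product-based network to recover side information, then reuse those embeddings in a downstream KT model. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: all names, losses, and hyperparameters (e.g., `PNNPretrainer`, the 64-dim embeddings, the joint link/difficulty objective) are illustrative assumptions.

```python
# Sketch: pre-train question embeddings by predicting side information
# (question-skill links from the bipartite graph, plus question difficulty)
# with a simple product-based network. Hypothetical names throughout.
import torch
import torch.nn as nn

class PNNPretrainer(nn.Module):
    def __init__(self, num_questions, num_skills, dim=64):
        super().__init__()
        self.q_emb = nn.Embedding(num_questions, dim)  # embeddings to pre-train
        self.s_emb = nn.Embedding(num_skills, dim)
        # Product layer in the PNN spirit: an inner-product feature of the
        # two field embeddings, concatenated with the raw (linear) signals.
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim + 1, 128), nn.ReLU(),
            nn.Linear(128, 2),  # outputs: [question-skill link logit, difficulty]
        )

    def forward(self, q_idx, s_idx):
        q, s = self.q_emb(q_idx), self.s_emb(s_idx)
        inner = (q * s).sum(dim=-1, keepdim=True)  # pairwise product feature
        out = self.mlp(torch.cat([q, s, inner], dim=-1))
        return out[:, 0], out[:, 1]  # link logit, predicted difficulty

# Toy usage: one gradient step on fabricated batch data.
model = PNNPretrainer(num_questions=1000, num_skills=100)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
q = torch.randint(0, 1000, (32,))
s = torch.randint(0, 100, (32,))
link = torch.randint(0, 2, (32,)).float()  # does question q cover skill s?
diff = torch.rand(32)                      # empirical difficulty in [0, 1]

link_logit, pred_diff = model(q, s)
loss = nn.functional.binary_cross_entropy_with_logits(link_logit, link) \
     + nn.functional.mse_loss(pred_diff, diff)
opt.zero_grad(); loss.backward(); opt.step()

# After pre-training, model.q_emb.weight would replace one-hot question
# inputs in an existing deep KT model (e.g., DKT), frozen or fine-tuned.
```

This illustrates only the pre-training stage described in the abstract; the paper additionally distinguishes three kinds of relations in the question-skill bipartite graph, which this sketch collapses into a single link-prediction target.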

Authors (6)
  1. Yunfei Liu (40 papers)
  2. Yang Yang (884 papers)
  3. Xianyu Chen (14 papers)
  4. Jian Shen (68 papers)
  5. Haifeng Zhang (59 papers)
  6. Yong Yu (219 papers)
Citations (93)
