DBGCN: A Knowledge Tracing Model Based on Dynamic Breadth Graph Convolutional Networks

Ping Hu, Zhaofeng Li, Pei Zhang, Jimei Gao, Liwei Zhang
DOI: 10.4018/IJWLTT.342848

Abstract

Given the extensive use of online learning in educational settings, knowledge tracing (KT) is becoming increasingly essential. KT aims to predict a student's future knowledge acquisition from their past learning activities, thereby improving the efficiency of student learning. However, effectively deriving dynamic, evolving student representations from historical records remains a formidable challenge. This paper introduces a knowledge tracing method based on Dynamic Breadth Graph Convolutional Networks (DBGCN). DBGCN leverages breadth graph convolutional networks to learn representations of questions and knowledge points from dynamically constructed topological graphs. It employs student state information as an attention query vector to augment student representations, partially mitigating the difficulty of capturing dynamic shifts in student states. The effectiveness of the proposed DBGCN method is demonstrated through extensive experiments.

Introduction

With the rise of online education, platforms such as massive open online courses (MOOCs) are becoming increasingly intelligent. Guidance tailored to learners' individual characteristics, such as their strengths and weaknesses, can also help learners understand their own progress. Knowledge tracing (Cui et al., 2022) aims to predict students' future knowledge acquisition based on their learning history, thereby enhancing learning efficiency. This paper addresses the knowledge tracing problem in education. Specifically, we focus on how to use students' historical learning data to predict their future learning needs and performance, and how to track students' knowledge acquisition from their behavior patterns and historical data so as to provide personalized learning guidance. Knowledge tracing (KT) aims to accurately track how a learner's understanding of concepts evolves over time, as reflected in their past exercise performance. This capability underpins downstream tasks such as automated assessment of student abilities, principled planning of learning strategies, and accurate recommendation of exam resources.
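
Formally, KT is commonly stated as follows (this is the standard formulation of the task; the notation is ours, not taken verbatim from this article): given a student's interaction history X = {(q_1, a_1), ..., (q_t, a_t)}, where q_i is the question attempted at step i and a_i ∈ {0, 1} records whether it was answered correctly, the model estimates P(a_{t+1} = 1 | q_{t+1}, X), the probability that the student will answer the next question correctly.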

Traditional KT methods, including Bayesian Knowledge Tracing (BKT) as proposed by Corbett and Anderson (2005), represent each student's knowledge state with binary variables, where each variable indicates whether the student has mastered a specific knowledge point. BKT then uses a Hidden Markov Model to estimate the student's level of knowledge mastery. Käser et al. (2017) propose a personalized BKT variant that accounts for differences among students in two categories of model parameters. Nonetheless, the BKT model assumes that each question pertains to a single skill and treats different skills as independent. Consequently, these models are ill-suited to questions that involve multiple skills and cannot capture the interconnections between distinct skills.
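
To make the BKT mechanism concrete, the sketch below implements the standard single-skill posterior update. The parameter values (slip, guess, learn) are illustrative defaults, not values from this article.

    def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
        """One BKT step: Bayesian posterior update from the observed
        answer, followed by the learning transition. Parameter values
        here are illustrative defaults, not taken from the article."""
        if correct:
            evidence = p_know * (1 - p_slip) + (1 - p_know) * p_guess
            posterior = p_know * (1 - p_slip) / evidence
        else:
            evidence = p_know * p_slip + (1 - p_know) * (1 - p_guess)
            posterior = p_know * p_slip / evidence
        # The student may acquire the skill during this step.
        return posterior + (1 - posterior) * p_learn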

Over the past few years, motivated by advances in deep learning (Song et al., 2022), most recent knowledge tracing research has focused on applying deep learning techniques. Piech et al. (2015) introduce DKT, which uses Recurrent Neural Networks (RNNs), as outlined by Sherstinsky (2020), to model a student's practice history and predict future performance. To capture the complex nature of student learning, some studies extend DKT with external memory structures, such as the Key-Value Memory Network (KVMN) (Miller et al., 2016). The latent variables in these approaches have stronger representational capacity. Nonetheless, the static nature of their key-value matrices makes it difficult to monitor students' knowledge states efficiently.
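
For reference, the following is a minimal DKT-style model in PyTorch: an LSTM over one-hot interaction encodings with a per-skill sigmoid output. This is a common reconstruction of the DKT architecture, with illustrative hyperparameters, not code from the cited papers.

    import torch
    import torch.nn as nn

    class DKT(nn.Module):
        """Minimal DKT-style model: an LSTM over one-hot interaction
        encodings, predicting per-skill correctness probabilities."""
        def __init__(self, num_skills, hidden_size=128):
            super().__init__()
            # Each interaction is one-hot over 2 * num_skills:
            # which skill was practiced, and whether it was answered correctly.
            self.rnn = nn.LSTM(2 * num_skills, hidden_size, batch_first=True)
            self.out = nn.Linear(hidden_size, num_skills)

        def forward(self, interactions):
            # interactions: (batch, seq_len, 2 * num_skills)
            outputs, _ = self.rnn(interactions)
            # Probability of answering each skill correctly at each step.
            return torch.sigmoid(self.out(outputs))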

Influenced by the Transformer approach (Cui et al., 2023), several research efforts have integrated attention mechanisms into KT. The core idea is to learn attention weights over the questions in a student's learning history, addressing a limitation of the DKT model, which treats all questions in an interaction sequence as equally important. Ghosh et al. (2020) introduce scaled dot-product attention networks into the KT model, learning student states from multiple subspaces (Ma et al., 2023). However, knowledge tracing tasks often involve multiple relational structures, such as the complex relationships between exercises and skills, as well as relationships among exercises themselves. To capture these associations, contemporary work turns to graph learning methods such as Graph Neural Networks (GNNs) (Wan et al., 2023). Yang et al. (2021) propose the Graph-based Interaction Knowledge Tracing (GIKT) method, which constructs a graph of question-skill relationships and uses GNNs to learn relationships within the graph, allowing relationships between sequences to be captured more effectively.
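
To illustrate this graph-based direction, the sketch below shows one graph-convolution layer over a question-skill graph, followed by an attention readout that uses the student's hidden state as the query, echoing the DBGCN description in the abstract. All names and structural choices here are our own illustrative reconstruction, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class QuestionSkillGCN(nn.Module):
        """One graph-convolution layer over a question-skill graph,
        followed by an attention readout queried by the student's
        hidden state. An illustrative reconstruction, not DBGCN itself."""
        def __init__(self, dim):
            super().__init__()
            self.proj = nn.Linear(dim, dim)

        def forward(self, x, adj, student_state):
            # x: (num_nodes, dim) embeddings of questions and skills
            # adj: (num_nodes, num_nodes) adjacency matrix with self-loops
            # student_state: (dim,) hidden state of the current student
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
            h = torch.relu(self.proj(adj @ x / deg))  # mean-aggregate neighbors
            # Scaled dot-product attention with the student state as query.
            scores = h @ student_state / h.size(-1) ** 0.5
            weights = F.softmax(scores, dim=0)
            return (weights.unsqueeze(-1) * h).sum(dim=0)  # student-aware summary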
