Evaluating the Effectiveness of Bayesian Knowledge Tracing Model-Based Explainable Recommender

Kyosuke Takami, Brendan Flanagan, Yiling Dai, Hiroaki Ogata
Copyright: © 2024 |Pages: 23
DOI: 10.4018/IJDET.337600

Abstract

Explainable recommendation, which provides an explanation of why a quiz is recommended, helps to improve transparency, persuasiveness, and trustworthiness. However, little research has examined the effectiveness of explainable recommenders, especially with respect to academic performance. To assess this effectiveness, the authors evaluated the math academic performance of middle school students (n=115) using a pre- and post-test evaluation design. Between the pre- and post-test, students were encouraged to use a Bayesian Knowledge Tracing model-based explainable recommendation system. To evaluate how well students became able to do what they previously could not, the authors defined a growth rate and found that the number of clicks on recommended quizzes had a positive effect on both the total number of solved quizzes (R=0.343, P=0.005) and the growth rate (R=0.297, P=0.017), despite no correlation between the total number of solved quizzes and the growth rate. The results suggest that using an explainable recommendation system to learn efficiently enables students to do what they could not do before.
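The recommender described above is built on Bayesian Knowledge Tracing (BKT), which maintains a per-skill mastery probability that is updated after each observed response. A minimal sketch of the standard BKT update is given below; the parameter names and values (p_transit, p_slip, p_guess) are illustrative defaults, not the paper's fitted parameters, and the paper's actual implementation may differ.

```python
def bkt_update(p_know, correct, p_transit=0.1, p_slip=0.1, p_guess=0.2):
    """Standard BKT step: Bayesian posterior over mastery given one
    observed response, followed by a learning (transition) step.

    p_know    -- prior probability the skill is mastered, P(L)
    correct   -- whether the observed response was correct
    p_transit -- probability of learning the skill after a practice, P(T)
    p_slip    -- probability of answering wrong despite mastery, P(S)
    p_guess   -- probability of answering right without mastery, P(G)
    """
    if correct:
        # P(L | correct) via Bayes' rule
        posterior = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        # P(L | incorrect) via Bayes' rule
        posterior = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Learning transition: a non-mastered student may learn from practice
    return posterior + (1 - posterior) * p_transit
```

In an explainable recommender, the resulting mastery estimate can be surfaced directly in the explanation (e.g., recommending quizzes for skills whose estimated mastery is lowest), which is one way a BKT-based system can justify why a particular quiz was suggested.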

Introduction

Artificial intelligence (AI) in education has enabled the development of e-learning systems that simulate students’ knowledge and experience to provide personalized support to students (Nwana, 1990; Self, 1974; Wenger, 2014). AI-supported e-learning refers to the use of AI techniques (e.g., fuzzy logic, decision trees, Bayesian networks, neural networks, genetic algorithms, and hidden Markov models) in e-learning (i.e., using computer and network technologies for learning or training) (Colchester et al., 2017). A recent meta-review by Kaudi et al. (2021) reported that the most commonly identified AI-supported e-learning systems were adaptive learning systems, followed by intelligent tutoring systems, with recommendation systems being the least reported. Some recommender systems for personalized learning have adopted collaborative filtering (Chen & Cui, 2020; Wind et al., 2018), content-based filtering (Kandakatla & Bandi, n.d.; Lops et al., 2011), and knowledge-based filtering (Haddad & Naser, 2017; Samin & Azim, 2019). These methods are commonly used in recommendation systems, but it is difficult to describe them as AI-supported learning systems since they do not employ Bayesian networks or neural networks. Therefore, AI-supported recommendation systems in the educational field have not been well studied.

On the other hand, when we shift our focus to daily life, it is clear that recommender systems are everywhere, and AI is being used there as well. For example, Amazon recommends products with collaborative filtering (Smith & Linden, 2017), and Netflix recommends movies using deep learning (Amatriain & Basilico, 2015). In e-commerce recommendation research, explainable recommendations, which provide explanations of why an item is recommended, have received much attention for improving transparency, persuasiveness, and trustworthiness (Zhang & Chen, 2020). Based on these studies, it is supposed that, in education as well, explanations from a learning system could provide additional benefits for students. Previous research on intelligent tutoring systems has shown that student motivation in system-based self-regulated learning can be improved by prompting and feedback mechanisms, leading to higher achievement (Duffy & Azevedo, 2015). Further, eXplainable AI (XAI) has begun to attract attention in the field of education owing to emerging concerns about Fairness, Accountability, Transparency, and Ethics (Khosravi et al., 2022). Explanations interpreting the decision-making process of AI are very important for teachers because they must be accountable to students, parents, and governments. Teachers need to know why particular feedback was given by the AI, and interpreting why it was given may help teachers improve their teaching skills.
