An Item Response Theory Approach to Enhance Peer Assessment Effectiveness in Massive Open Online Courses

Minoru Nakayama, Filippo Sciarrone, Marco Temperini, Masaki Uto
Copyright: © 2022 |Pages: 19
DOI: 10.4018/IJDET.313639

Abstract

Massive open online courses (MOOCs) are effective and flexible resources to educate, train, and empower populations. Peer assessment (PA) provides a powerful pedagogical strategy to support educational activities and foster learners' success, even when a huge number of learners is involved. Item response theory (IRT) can model students' features, such as the skill to accomplish a task and the capability to mark tasks. In this paper the authors investigate the applicability of IRT models to PA in the learning environments of MOOCs. The main goal is to evaluate the relationships between some students' IRT parameters (ability, strictness) and some PA parameters (number of graders per task and rating scale). The authors use a data set simulating a large class (1,000 peers), generated from a Gaussian distribution of the students' skill at accomplishing a task. The IRT analysis of the PA data shows that the best estimate of peers' ability is obtained when 15 raters per task are used, with a [1,10] rating scale.
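The simulated setting described in the abstract (1,000 peers with Gaussian-distributed ability, each task marked by 15 other peers on a [1,10] scale) can be sketched as follows. This is a minimal illustration only: the linear mapping from latent ability to the rating scale, the noise levels, and the strictness distribution are assumptions for the sketch, not the IRT formulation the paper actually uses.

```python
import random

random.seed(0)

N_PEERS = 1000                 # simulated class size, as in the paper
N_RATERS = 15                  # raters per task (the paper's best setting)
SCALE_MIN, SCALE_MAX = 1, 10   # the [1,10] rating scale

# Peers' latent ability drawn from a Gaussian, as in the paper's data set.
ability = [random.gauss(0.0, 1.0) for _ in range(N_PEERS)]
# Each peer also acts as a rater with an individual strictness (severity);
# the N(0, 0.3) choice here is an illustrative assumption.
strictness = [random.gauss(0.0, 0.3) for _ in range(N_PEERS)]

def observed_rating(theta, severity):
    """Map (ability - rater severity + noise) onto the rating scale.
    This linear squashing is illustrative, not the paper's IRT model."""
    latent = theta - severity + random.gauss(0.0, 0.2)
    # Compress a roughly N(0,1) latent value into [SCALE_MIN, SCALE_MAX].
    score = (latent + 3.0) / 6.0 * (SCALE_MAX - SCALE_MIN) + SCALE_MIN
    return max(SCALE_MIN, min(SCALE_MAX, round(score)))

# Assign each peer's task to N_RATERS distinct other peers and collect marks.
ratings = {}
for peer in range(N_PEERS):
    raters = random.sample([p for p in range(N_PEERS) if p != peer], N_RATERS)
    ratings[peer] = [observed_rating(ability[peer], strictness[r]) for r in raters]
```

Varying `N_RATERS` and the scale bounds in such a simulation is what allows comparing configurations, as the study does when identifying 15 raters on a [1,10] scale as the best-performing setting.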

Introduction

E-learning technologies have been evolving and expanding at high rates, so Massive Open Online Courses (MOOCs) and Open Educational Resources (OERs) are being rapidly integrated into educational processes by organizations and institutions around the world (West-Pavlov, 2018). The Internet, and digital access to information in general, has been recognized as the main tool supporting development (Gillwald et al., 2019): that is why several efforts are made, also by international agencies, to promote the availability of network connections in developing countries (Siles, 2020). If the availability and spread of numerous education/training opportunities can help surmount the abovementioned barrier, then Technology Enhanced Learning (TEL), and Networked Learning in particular, reveal themselves as significant for development, beyond their intrinsic educational and pedagogical advantages. Networked education is gaining further confirmation in the current period, in which an infectious disease has been spreading and keeps reappearing, forcing long-lasting modifications in the protocols of the worldwide educational systems. Students, and people in general, have to minimize in-person contacts, and of course teaching and learning activities have to go on. MOOCs and Open Education are effective and flexible resources to educate, train, and empower populations previously denied actual access to education or online education. MOOCs have been developed for several years to provide learning content for a huge worldwide audience (de Freitas et al., 2015), and it is still quite common that one can enroll in a MOOC and attend the lectures free of charge. On the other hand, easy and inexpensive enrollment is one of the factors contributing significantly to MOOC dropout rates, together with difficulties in maintaining motivation and engagement.

With respect to motivation and engagement, MOOCs have the same problems as other typologies of distance learning, only made more severe by the extended number of learners. A strong didactic strategy involving the extensive use of assessment, particularly formative assessment (Bloom et al., 1971), can be (part of) a solution to such problems. However, the assessment options available for MOOCs are known to be limited (Admiraal et al., 2015).

In particular, peer assessment (PA) is available as a powerful strategy to support educational activities and foster learners' success, even when a huge number of learners is involved (Alcarria et al., 2018). Through PA, learners are exposed to different cognitive experiences: on the one hand, they are requested to perform a task (e.g., answering an open-ended question); on the other hand, they are requested to assess other learners' works, which involves them in cognitive activities of a higher level than just answering (Bloom, 1956).

Moreover, a significant aspect of PA is that it can be delivered at a distance. PA has already been introduced in MOOCs as a learning strategy (Sun et al., 2015). However, the reliability of the assessments, and in general the applicability of this strategy, are still under discussion by scholars (Alcarria et al., 2018). In practice, there is a question of reliability about the final grade computed by the PA framework if it is based exclusively on peers' marking work. A further question concerns the reliability of students' grading ability, which depends clearly, if not completely, on peers' proficiency in the subject matter of the task being graded. So, studying new PA models suitable to be applied to a MOOC and enhancing the MOOC learning setting is a worthwhile research activity. In particular, one of the directions is how to have a PA system able, first, to manage in a computationally feasible way the big amount of data coming from a PA session in a MOOC and, second, to use such data to maintain reliable student models supporting automated grading of the tasks/artifacts produced by the learners. With the above in mind, and aiming to study the best configuration of a PA system operating in a MOOC, in this paper we consider the integration into the PA model of the well-established formal methods described in item response theory (IRT).
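To make the idea of an IRT model with rater effects concrete, the following sketch computes the probability of each rating category under a partial-credit-style polytomous model extended with a rater-severity facet: a rater's strictness shifts the learner's effective ability downward. The exact parameterization (adjacent-category form, the `thresholds` values) is an illustrative assumption; the paper's specific IRT model may differ.

```python
import math

def rating_probability(theta, severity, thresholds):
    """Return the probability of each rating category (0..K) that a
    rater with the given severity assigns to a learner of ability theta,
    under an illustrative partial-credit model with a rater facet.

    theta      -- learner's latent ability
    severity   -- rater's strictness (higher = harsher marks)
    thresholds -- K step difficulties separating the K+1 categories
    """
    # Cumulative log-numerators: category k accumulates k step terms.
    logits = [0.0]
    s = 0.0
    for tau in thresholds:
        s += theta - severity - tau
        logits.append(s)
    # Softmax over categories, shifted by the max for numerical stability.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

For example, `rating_probability(2.0, 0.0, [-1.0, 0.0, 1.0])` concentrates probability on the top category, while the same call with `theta = -2.0` concentrates it on the bottom one; increasing `severity` shifts mass toward lower categories, which is precisely the "strictness" parameter the study estimates alongside ability.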
