A Framework for Large-Scale Automatic Fluency Assessment

Warley Almeida Silva, Luiz Carlos Carchedi, Jorão Gomes Junior, João Victor de Souza, Eduardo Barrere, Jairo Francisco de Souza
Copyright © 2021 | Pages: 19
DOI: 10.4018/IJDET.2021070105

Abstract

Learning assessments are important for monitoring the progress of students throughout the teaching process. In the digital era, many local and large-scale learning assessments are conducted through technological tools, and a large-scale learning assessment can be designed to tackle one or more parts of the teaching process. Oral reading fluency assessments evaluate the ability to read reference texts. However, although using applications to collect students' readings avoids logistics costs and speeds up the process, evaluating the resulting recordings remains a challenging task. Therefore, this work presents a computational solution for large-scale, precision-critical fluency assessment. The goal is to build an approach based on automatic speech recognition (ASR) that automatically evaluates the oral reading fluency of children while reducing hiring costs as much as possible.

1. Introduction

Learning assessments are important for monitoring the progress of students throughout the teaching process. In addition to providing an individual screening of students, large-scale learning assessments allow educators to obtain rich information about classes, schools, and regions across the country. Through the reports of large-scale assessments, decision-makers (e.g., heads of governmental institutions or presidents of private schools) are able to detect problems at an early stage, set consistent learning goals, and draw up plans for improving learning on a wider scale (Suskie, 2018).

In the digital era, many local and large-scale learning assessments are conducted through technological tools. Some examples are language proficiency exams (e.g., TOEFL iBT) and university entrance exams (e.g., GRE). These assessments often achieve their goals through a combination of multiple-choice questions, which are well structured and easy for computers to process, and open-response questions, which the student answers freely. Open-response questions may appear not only as text but also as audio or video recorded during the exam. Multiple-choice questions can be quickly graded by an algorithm, whereas open-response questions are commonly evaluated by a human professional. The unstructured nature of open-response questions requires refined approaches to automatically grade a student's performance, which explains why many decision-makers opt to hire human professionals.

There has been considerable development in the literature regarding the automatic assessment of unstructured data. For text, Natural Language Processing (NLP) techniques can, for example, automatically grade short answers (Sijimol, 2018) and essays (Rokade, 2018). Similar progress has been made for audio through Automatic Speech Recognition (ASR) techniques, e.g., extracting the phones pronounced in a recording or identifying the age and gender of the speaker (Safavi, 2018). Thus, technology has the potential to help large-scale assessments on two fronts. First, it acts as a facilitator from a logistical perspective by avoiding the use of some materials, the need to transport cargo from one place to another, and the manual handling of exams. Second, it performs the role of a grader by automatically assessing responses with simple or refined techniques, speeding up final reports and considerably decreasing overall costs.
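As an illustration of how ASR output could feed an automatic grader, the sketch below computes the word error rate (WER) between a reference text and a hypothesized transcript. This is a minimal sketch, not the method proposed in this article: it assumes the transcript has already been produced by some ASR system (obtaining it is out of scope), and the function name and sample strings are hypothetical.

# A minimal sketch, not this article's method: compare a hypothesized
# ASR transcript against the reference text using word error rate (WER).
# The transcript is assumed to come from some ASR system.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Levenshtein distance over word sequences via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

reference = "the quick brown fox jumps over the lazy dog"
transcript = "the quick brown fox jump over lazy dog"  # hypothetical ASR output
print(f"WER: {word_error_rate(reference, transcript):.2f}")

A lower WER indicates a closer match between what was read and the reference text; a production grader would likely add text normalization and ASR confidence handling on top of such a comparison.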

A large-scale learning assessment can be designed to tackle one or multiple parts of the teaching process. However, an important aspect of literacy is often overlooked by learning assessments: reading fluency, i.e., how fluently students read texts in their own native language. Reading fluently shows that the speaker has a good comprehension of the text (Rasinski, 2017), and it is crucial for every individual who wants to continue their studies, learn a profession, and, ultimately, live in society (Dias, 2016). Therefore, educators have become increasingly interested in ways to collect and analyze information about the reading fluency of their students during the stages of reading development within large-scale learning assessments.

Oral reading fluency assessments, referred to more simply as fluency assessments, add complexity to the already non-trivial open-response questions of common learning assessments. Examples of these challenges include establishing a comprehensive methodology for grading reading skills, which should rely on objective criteria rather than subjective interpretation, and handling multiple accents within the same assessment, which increases the need for credible professionals since evaluators may be biased towards their own accent. In a large-scale setting, these assessments also generate a high number of audio recordings to be evaluated, which implies higher costs than other learning assessments, especially when hiring a specialized workforce. Nevertheless, assessing reading fluency is of tremendous significance, since it can preemptively help diagnose local and widespread problems (e.g., the difficulty of an individual in reading fluently, or the failure of teaching methodologies in certain schools or regions across the country) and direct future funding toward materials or teacher training.
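One objective criterion commonly used in the oral reading fluency literature is words correct per minute (WCPM). Whether this framework adopts WCPM is not stated in this preview; the sketch below simply illustrates, under that assumption, how such a score could be derived from an ASR transcript and the recording duration. All names and sample strings are hypothetical.

# A minimal sketch of an objective fluency score: words correct per minute
# (WCPM). Assumes a transcript from some ASR system and a known recording
# duration; whether this framework uses WCPM is an assumption made here.
from difflib import SequenceMatcher

def words_correct_per_minute(reference: str, transcript: str,
                             duration_seconds: float) -> float:
    """Count reference words matched in order by the transcript, per minute."""
    ref = reference.lower().split()
    hyp = transcript.lower().split()
    # Align the two word sequences and count words read exactly as written.
    matcher = SequenceMatcher(a=ref, b=hyp, autojunk=False)
    correct = sum(block.size for block in matcher.get_matching_blocks())
    return correct / (duration_seconds / 60.0)

# Hypothetical example: a child reads an 8-word passage in 12 seconds.
passage = "the sun rises early in the quiet village"
reading = "the sun rise early in quiet village"  # hypothetical ASR transcript
print(f"WCPM: {words_correct_per_minute(passage, reading, 12.0):.1f}")

Because the score depends only on the alignment count and the elapsed time, it sidesteps the subjective interpretation and accent bias mentioned above, though ASR errors on accented speech would still propagate into the score.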
