Introduction
Computer-assisted pronunciation training (CAPT) programs are a readily available option for language learners and instructors in the 21st century, and they are promoted by developers for their potential to facilitate foreign language learning. CAPT programs are putatively advantageous because they allow independent study of an L2 with corrective feedback from an automated system (Neri, Cucchiarini, Strik & Boves, 2002; Neri, Cucchiarini & Strik, 2006). By using such automated programs for pronunciation practice, learning time is expected to become a qualitatively improved experience through the mitigation of negative affect (Neri et al., 2002), such as the loss of face (Chiu, Liou & Yeh, 2007) that can occur in live classrooms.
As the language learning environment continues to gravitate toward technological innovation, the need to study the efficacy of the latest technologies is apparent (Hubbard, 2006; Macaro, Handley & Walter, 2012), yet there remains a dearth of studies evaluating CALL materials (Chapelle, 2010; Nielson, 2011). Material evaluation studies must keep pace with CALL development to determine whether the claims and promises of CALL programs are substantive or merely marketing rhetoric. One leading CALL platform, originally designed as a CAPT program, has gained much recognition among EFL teachers yet has not received systematic scrutiny from researchers: English Central (EC)1. EC provides pedagogically enhanced videos, sourced from across the Internet and from partner media providers, as material for pronunciation practice, as well as for listening practice and vocabulary learning.
As of this writing, three studies by three respective research teams have evaluated EC, either in comparison to other CAPT programs or as the sole learning platform, yielding four publicly available reports. These studies attempt to evaluate the effectiveness of EC in producing English learning gains, as well as to probe users' levels of satisfaction, perceived effectiveness, and attitudes. Doubtless, the effectiveness of the program is of paramount concern for EFL educators and EC designers. Yet scrutiny must also be directed toward user characteristics in order to probe the individual differences (ID) variables underlying effectiveness (Hubbard, 2006). Of particular concern is the amount and consistency of usage by learners, which is determined over the long run at the nexus of the program interface and ID variables. Indeed, a number of studies have posited that learning styles may be a crucial factor in CALL platform efficacy (Grasha & Yangarber-Hicks, 2000; Valenta, Therriault, Dieter, & Mrtek, 2001), as there is evidence that latent traits correlate both with the degree of engagement with learning platforms and with user perceptions of the programs (Küçük, Genç-Kumtepe, & Tasci, 2010).
Therefore, the present study departs from the paradigm of variable-analytic effectiveness studies and turns toward an exploration of user characteristics, in order to identify variables of interest captured by tracking systems that can inform future material evaluation studies of EC and other language learning platforms. In contrast to learning styles research that relies on self-report questionnaires, the present study utilizes objective, observable data obtained from the CAPT platform's activity logs to derive cluster-based user types. It is expected that a time series analysis of learners' usage patterns by user type will yield insights that prove informative when designing program efficacy studies.
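To make the general approach concrete, the following is a minimal sketch (not the study's actual procedure) of how cluster-based user types might be derived from log data: each learner is represented by usage features, and a simple k-means pass groups learners into types. The feature names, values, and cluster count here are all hypothetical.

```python
# Illustrative only: clustering per-learner usage features from activity
# logs into user types. Features and values below are hypothetical, and
# the naive deterministic initialization is for reproducibility of the
# sketch, not a recommended practice.

def kmeans(points, k, iters=20):
    """Cluster tuples of numeric features into k groups; returns (labels, centroids)."""
    centroids = list(points[:k])  # naive init: first k points
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each learner to the nearest centroid (squared Euclidean distance).
        labels = [
            min(range(k),
                key=lambda c: sum((p - q) ** 2 for p, q in zip(pt, centroids[c])))
            for pt in points
        ]
        # Move each centroid to the mean of its assigned learners.
        for c in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(d) / len(members) for d in zip(*members))
    return labels, centroids

# Hypothetical per-learner features: (sessions per week, minutes per session)
usage = [
    (0.5, 8), (1.0, 10), (0.8, 7),    # sporadic, short sessions
    (5.0, 25), (6.0, 30), (5.5, 28),  # frequent, sustained sessions
]
labels, centroids = kmeans(usage, k=2)
```

In this toy example the sporadic and the frequent users fall into separate clusters; in practice, a study of this kind would track how such usage-derived types behave over time rather than rely on self-reported style categories.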