FL Instructor Beliefs About Machine Translation: Ecological Insights to Guide Research and Practice

Emily Hellmich, Kimberly Vinall
DOI: 10.4018/IJCALLT.2021100101

Abstract

Machine translation (MT) platforms have gained increasing attention in the educational linguistics community. The current article extends past research on instructor beliefs about MT by way of an ecological theoretical framework. The study reports on a large-scale survey (n=165) of FL university-level instructors in the U.S. Findings indicate strong lines being drawn around acceptable MT use (e.g., in relation to text length and skill, policies), an acknowledgement of widespread student use driven by diverse motivations, and the Janus-faced nature of MT's potential threat to the profession. These findings reveal several salient tensions in how MT mediates relationships in language education (e.g., constructions of students, the nature of language and language learning, goals of the profession) that shed new light on the impact of MT technologies on the field. Implications for future research and the development of pedagogical practices anchored in digital literacies conclude the piece.

Background

Translating Machine Translation

Machine translation (MT), the use of software to automatically translate text from one language to another, has been approached in different ways (Qun & Xiaojun, 2015, p. 105). The oldest approaches, rule-based MT (e.g., Babelfish), required language rules (grammatical, syntactic, etc.) to be manually programmed into the software (Jiménez-Crespo, 2017; Qun & Xiaojun, 2015). Statistical machine translation, introduced in the late 1980s, relies on probabilistic statistical models that use algorithms to draw out correspondences between parallel texts (Le & Schuster, 2016; Qun & Xiaojun, 2015; Wu et al., 2016).

Deep learning machine translation, the latest in MT approaches, uses advances in machine learning to draw out patterns in raw data sets: rather than relying on pre-coded input or pre-written rules, deep learning MT software constructs (or learns) rules from the linguistic input itself (Lewis-Kraus, 2016; Poibeau, 2017). The resulting systems are faster, require less human programming on the front end, and better handle longer texts and rare words (Kelleher, 2019; Lewis-Kraus, 2016; Poibeau, 2017; Wu et al., 2016). For example, in 2016 Google launched a new version of its Google Translate platform that uses a form of deep learning (neural networks) (Le & Schuster, 2016). This new approach produces noticeably better translations in languages with sufficiently large databases, a change that has been documented by the MT industry (Lewis-Kraus, 2016; Wu et al., 2016) as well as by language teaching/learning professionals (Briggs, 2018; Ducar & Schocket, 2018; Stapleton & Kin, 2019).
