A Review of Faculty Self-Assessment TPACK Instruments (January 2006 – March 2020)

Kristin C. Scott
DOI: 10.4018/IJICTE.2021040108

Abstract

Since Mishra and Koehler released their framework of technological pedagogical content knowledge (TPACK), researchers have attempted to measure it with a variety of self-assessment instruments. Early TPACK instruments struggled with construct validity; more recently, several instruments have been tested successfully for validity and reliability. Since 2006, 233 articles have been published that use a TPACK self-assessment survey of faculty in either a mixed-methods or empirical study. Faced with this abundance of literature, researchers may be overwhelmed when attempting to find a survey instrument suitable for their own studies. This review is designed to help researchers find valid and reliable instruments by describing frequently used scales, analyzing the respondents from the identified studies, and summarizing the reliability and validity studies associated with published instruments. A link to the entire data set, Technological Pedagogical Content Knowledge (TPACK) Self-Assessment Survey Dataset (2006 – March 2020), is also provided.

Background

Mishra and Koehler’s (2006) TPACK framework extended Shulman’s (1987) concept of a new knowledge created at the intersection of content knowledge (CK) and pedagogical knowledge (PK), combined with a reconception of Pierson’s (2001) notion of TPCK. Unlike Shulman, who considered only “commonplace” technologies (Mishra & Koehler, 2006, p. 1023), they included “digital computers and computer software, artifacts and mechanisms that are new and not yet part of the mainstream” (p. 1023). Unlike Pierson (2001), Mishra and Koehler built on three basic constructs: content knowledge (CK), pedagogical knowledge (PK), and technological knowledge (TK). They accepted Shulman’s (1987) theory that PCK develops from CK and PK and extended it by theorizing that technological content knowledge (TCK) arises at the intersection of CK and TK; technological pedagogical knowledge (TPK) develops at the intersection of PK and TK; and technological pedagogical content knowledge emerges where TPK, TCK, and PCK converge within a larger disciplinary context (see Figure 1).

Figure 1.

TPACK framework (© 2012, tpack.org, Used with permission)


Cox and Graham (2009) attempted to describe the TPACK constructs to further define the boundaries of the factors, clarifying what is and is not part of each construct. They provided elaborated definitions for each construct, giving specific examples for each, and redefined technology across the technology dimensions as “emerging technologies” (p. 63) rather than the “new” technologies suggested by Mishra and Koehler (2006). Cox and Graham did not limit their definition of technology to information and communication technologies (ICT), allowing the definition to change over time and preventing the TPACK framework from becoming obsolete. This conception of technology suggests that measurement instruments will need to evolve as some technologies become common, others die, and more emerge (Cox & Graham, 2009).

Angeli and Valanides (2009) suggested that for TPACK theory to be distinct from PCK theory (Shulman, 1987), it should concentrate on ICT, coupling it with TPACK (ICT–TPACK). They proposed a focus on the three base constructs of Mishra and Koehler (2006), CK, PK, and TK, along with two additional constructs: “knowledge of students and knowledge of the context in which the learning takes place” (Angeli & Valanides, 2009, p. 158).

Graham (2011) revisited the boundary issues identified by Cox and Graham (2009). Graham repeated the call for researchers to differentiate between “transparent technologies” and “emerging technologies” (2011, p. 1956). He defined emerging technologies as “new technologies (typically digital technologies) that are being investigated or introduced into a learning environment” (Graham, 2011, p. 1956). He suggested this is one reason some measurement instruments (e.g., Archambault & Barnett, 2010) failed to extract all the expected factors of TPACK in factorial analyses (Graham, 2011).

Yurdakul, Odabasi, Kilicer, Coklar, Birinci, and Kurt (2012) developed a scale to measure TPACK, the central construct of the TPACK framework, through self-assessment of skill competency on a 5-point Likert-type scale (e.g., “I can easily do it” / “I certainly can’t do it”). The TPACK-Deep scale consists of four factors: design, exertion, ethics, and proficiency.
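Reliability for Likert-type scales such as these is typically reported as Cronbach’s alpha, computed per factor from respondents’ item scores. As a minimal illustrative sketch (the function and sample data below are hypothetical, not drawn from any of the reviewed instruments), the coefficient can be computed from item variances and the variance of the total score:

```python
# Illustrative computation of Cronbach's alpha, the reliability
# coefficient commonly reported for Likert-type TPACK scales.
# Data layout: one row per respondent, one column per survey item.

def cronbach_alpha(responses):
    """Cronbach's alpha for a k-item scale from respondent rows."""
    k = len(responses[0])  # number of items on the scale

    def variance(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[i] for row in responses]) for i in range(k)]
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point Likert responses: 5 respondents x 3 items.
sample = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [1, 2, 1], [3, 4, 3]]
print(round(cronbach_alpha(sample), 3))  # → 0.988
```

Values near 1 indicate high internal consistency among a factor’s items; published validation studies of TPACK instruments generally report alpha for each factor alongside factor-analytic evidence of construct validity.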
