Investigating School Mathematics Performance and Affect: A Critique of Research Methods and Instruments

Gilah C. Leder
Copyright © 2017 | Pages: 16
DOI: 10.4018/978-1-5225-1738-2.ch007

Abstract

In this chapter, research relating to school mathematics is used as a case through which to critique methods and instruments commonly employed in educational research to determine performance and single or multiple aspects of affect. Advances in technology that have enabled the adaptation of previously used instruments are described. Self-report measures, administered face-to-face and online, and real-time and virtual observational methods are discussed in some detail. Illustrative data from specific studies are provided. Interpreting the different measurement outcomes is, it is argued, far from straightforward. This discussion raises issues relevant to research in a range of paradigms but is particularly pertinent to educational research conducted in the neo-positivist paradigm.
Chapter Preview

Introduction: Setting the Context

Researchers in the social sciences operating in the neo-positivist paradigm work from an ontological and axiological understanding that there is a discoverable reality relating to an aspect of human behavior, or at least that particular patterns and consistencies in behavior can be discovered, confirmed, or contested. Doing so requires data collection methods and instruments, along with forms of data analysis, that are capable of revealing such patterns and consistencies. When it comes to researching aspects of education, as the discussion of research methods and instruments in this chapter demonstrates, observing and measuring patterns and consistencies can be problematic, even when the subject area under investigation is mathematics.

Mathematics is generally recognized as a critical component of the school curriculum. Its dominant place was recently reaffirmed by Donnelly and Wiltshire (2014) in their influential Review of the Australian Curriculum. Internationally, further endorsement was provided by the Organisation for Economic Co-operation and Development [OECD], which emphasized: “Being able to read, understand and respond appropriately to numerical and mathematical information are skills that are essential for full social and economic participation” (OECD, 2013, p. 98).

Data from large-scale international comparative surveys such as the Programme for International Student Assessment [PISA] and the Trends in International Mathematics and Science Study [TIMSS] are widely treated as valid indicators of student progress. They offer a gross measure of group performance but do not provide a nuanced marker of an individual’s level of attainment. For the latter, different instruments are needed.

A test’s most important characteristic, according to Nichols and Berliner (2007), is its validity, a multi-dimensional construct. The validity of a test is described most comprehensively, they argued, in terms of four measures, the 4Cs. These are content validity: whether the content of the test covers what it is intended to measure; construct validity: whether the test actually measures the concept or attributes it is supposed to measure; criterion validity: whether the test predicts certain kinds of current or future achievement; and consequential validity: the consequences and decisions that are associated with test scores. There is general agreement in the test measurement literature about the importance and relevance of the first three measures in particular, but Popham’s (1997, p. 13) admonition that the “social consequences of test use should be addressed by test developers and test users” is still not universally heeded. Some possibilities for pursuing this in the case of mathematics education are explored in this chapter.
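Of the 4Cs, criterion validity is the one most readily expressed numerically: it is commonly reported as a validity coefficient, the correlation between test scores and scores on an external criterion measure such as later achievement. The short Python sketch below, using invented scores for ten hypothetical students, illustrates the calculation; it is offered as an illustrative aside, not as part of Nichols and Berliner’s account.

```python
# Illustrative sketch: criterion validity expressed as a validity
# coefficient, i.e., the Pearson correlation between scores on a test
# and scores on an external criterion measure. All data are invented.
import numpy as np

# Hypothetical mathematics test scores for ten students.
test_scores = np.array([52, 61, 45, 70, 66, 58, 49, 74, 63, 55])

# Hypothetical scores on a later criterion measure (e.g., end-of-year
# achievement) for the same ten students.
criterion_scores = np.array([55, 64, 50, 72, 60, 59, 47, 78, 65, 53])

# The off-diagonal entry of the 2x2 correlation matrix is Pearson's r.
r = np.corrcoef(test_scores, criterion_scores)[0, 1]
print(f"Criterion validity coefficient (Pearson r): {r:.2f}")
```

A coefficient close to 1 would suggest the test is a good predictor of the criterion; a coefficient near 0 would call its criterion validity into question.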

To facilitate a functional discussion of students’ academic progress both within and across countries, reference is often made to the three components of the curriculum promulgated by the International Association for the Evaluation of Educational Achievement [IEA] (see e.g., Mullis, Martin, & Foy, 2008). These comprise the intended curriculum (the curriculum mandated or favored in a particular country or setting), the implemented curriculum (the curriculum actually taught), and the attained curriculum (the outcomes of schooling: what students appear to have learnt). Yet the mathematics to which students are actually exposed can be altered, and potentially expanded or constrained, at each of the model’s three levels. External influences, local expertise, and individual preferences can mold or change what students ultimately experience. As Berliner (2011) has forcefully argued, the inevitable limitations of results reported from large-scale examinations are all too often overlooked: limitations arising from, for example, contextual data that were not reported, participants’ social class and associated advantages or disadvantages, or aspects of the curriculum considered beyond the scope of the test, including material that cannot be assessed through a paper-and-pencil instrument. Thus what is given as a student’s achievement score in mathematics is inevitably influenced, at least in part, by previous exposure to the content on which the student is tested and by how well that content aligns with the material actually covered. This constraint applies not just to large-scale tests but also to smaller, locally designed, and supposedly strategically targeted instruments.
