Impact of Automated Software Testing Tools on Reflective Thinking and Student Performance in Introductory Computer Science Programming Classes

Evorell Fridge, Sikha Bagui
DOI: 10.4018/IJICTE.2016010103

Abstract

The goal of this research was to investigate the effects of automated testing software on students' levels of reflection and on student performance. The study used a self-selecting, between-subjects design that examined the performance of students in introductory computer programming classes. Participants were given the option of using the Web-CAT software-testing tool to evaluate their computer code. Students' self-reported levels of reflection were measured using a reflective thinking survey.

1. Introduction

Learning to write good computer code is a difficult task for introductory programming students. Students can learn what constitutes good programming practice, but actually writing a program requires a different kind of experience. Hence, programming assignments are an integral part of the learning experience in introductory programming courses. Instructors assign programming tasks to students as solo work with strict warnings against collaboration. A student in an introductory programming class may excel at theoretical assignments like tests and quizzes and yet struggle through programming assignments.

There is a tendency for students to use a “Brownian motion” approach to programming that involves small random changes to code in the hopes that some solution will eventually arise (Edwards, 2004; Reek, 1989; Spacco, 2006). Students who take this approach make adjustments to their program without a particular plan, frantically hoping to get their program to do something useful. They continue to hit the compile button in the hope that whatever they just did will make their code work. Another popular approach that novices take is “Big Bang” coding (Edwards, 2003), which involves writing large amounts of computer code in one sitting without any testing. These students are often disappointed to learn that their program does not work as expected, and it is then difficult for them to pinpoint the problems. One-on-one interaction between a student and teacher certainly has the potential to help a student overcome problems, but computer science educators rarely have the time to sit with each student individually. Edwards (2003) and Spacco (2006) proposed the use of automated software-testing systems to provide feedback.

Automated software-testing systems provide a measure of feedback to students by immediately reporting the results of a series of test cases. They also benefit instructors by automating the monotonous task of testing for validity so that instructors may instead focus on grading for quality. Several institutions have used versions of this sort of tool with some measure of success (Douce, Livingstone, & Orwell, 2005). Automated software-testing tools have been criticized for encouraging students to “focus on output correctness” (Edwards, 2004, p. 28) at the expense of proper design and testing. Additional challenges to widespread adoption are the need to design programming assignments to work with automated graders and the added overhead and expertise needed to run these systems (Spacco, 2006).
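To make the feedback mechanism concrete, the sketch below shows the kind of instructor-provided test case such a system might run against each submission, assuming a Java assignment graded with a JUnit-style test suite (as in Web-CAT's typical setup). The Calculator class and its add method are hypothetical illustrations, not material from this study.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical instructor-provided test cases for an automated grader.
// The grader runs these against a student's submission and reports the
// pass/fail results back to the student immediately.
public class CalculatorTest {

    @Test
    public void addReturnsSumOfTwoPositiveIntegers() {
        Calculator calc = new Calculator();   // student-written class (assumed)
        assertEquals(7, calc.add(3, 4));      // expected behavior defined by the assignment
    }

    @Test
    public void addHandlesNegativeOperands() {
        Calculator calc = new Calculator();
        assertEquals(-1, calc.add(2, -3));
    }
}
```

In this arrangement the grader checks validity (does the submission pass the tests?), leaving the instructor free to assess design and style by hand.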

Research into the effectiveness of test-driven development (TDD) has shown that it contributes very little on its own to improvements in programmer productivity or the quality of the software (Kollanus, 2010). The use of TDD was, however, associated with an increased amount of time and thought spent on the development and testing of software (Huang & Holcombe, 2008; Marrero & Settle, 2005). Edwards (2004) associated the use of student test cases in automated software testing with fostering an environment of on-the-spot experimentation, which is closely associated with reflection-in-action. However, Edwards (2004) did not attempt to measure whether an increase in reflection was actually occurring, nor did he attempt to link this measurement to student performance.
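As an illustration of the red-green cycle that TDD prescribes (a generic sketch, not code from any of the cited studies), a student following TDD would first write a test such as the one below, watch it fail, and only then add just enough implementation to make it pass. The PalindromeChecker class and its isPalindrome method are hypothetical names chosen for the example.

```java
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.assertFalse;
import org.junit.Test;

// Hypothetical TDD example: the tests are written before the implementation
// and fail ("red") until the student supplies code that satisfies them ("green").
public class PalindromeCheckerTest {

    @Test
    public void recognizesSimplePalindrome() {
        assertTrue(PalindromeChecker.isPalindrome("racecar"));
    }

    @Test
    public void rejectsNonPalindrome() {
        assertFalse(PalindromeChecker.isPalindrome("hello"));
    }
}

// Minimal implementation written after the tests fail, making them pass.
class PalindromeChecker {
    static boolean isPalindrome(String s) {
        return new StringBuilder(s).reverse().toString().equals(s);
    }
}
```

The value attributed to this workflow in the studies above lies less in the final code than in the additional time spent thinking about expected behavior before writing it.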

The purpose of this study was to examine the effect of an automated software-testing environment on the average project grades of students in introductory computer science classes. We also looked at the influence that reflective thought may have on student performance. This study deviates from Edwards’ (2004) design by offering students researcher-provided tests instead of asking students to write their own. The rest of the paper is organized as follows: Section 2 presents the background and related works, Section 3 presents the experiment, Section 4 presents the results, and Section 5 presents the conclusion.
