It’s a practice used by a number of faculty, across a range of disciplines, and in a variety of forms. Sometimes the lowest exam score is simply dropped. In other cases, students may take a replacement exam and substitute its score for a lower exam score. The strategy is probably used with quizzes a bit more often than with exams.
One reason for adopting it is the perennial problem of students missing tests, sometimes for good reasons and other times for reasons not so good, but what teacher wants to adjudicate the reasons? The practice also responds to test anxiety: some students find testing situations so stressful that they aren’t able to demonstrate their understanding of the material.
There are arguments against the strategy. Some contend that it contributes to grade inflation. Others worry that knowing the lowest score will be dropped encourages strategic test taking: students decide beforehand not to study for a particular test or quiz, counting on dropping or replacing that score later. The result is a chunk of material they have learned less well. Another downside of a replacement or makeup exam is the extra time and energy involved in writing and grading it.
But what do we know about the practice empirically? Very little, it turns out. Raymond MacDermott, who has done several empirical explorations of the practice, found little related work in his review of the literature. In this study he looked at the effects of three grading schemes on cumulative final exam scores in a microeconomics course: (1) three exams, each worth 20 percent of the grade, plus a cumulative final worth 40 percent; (2) three exams with the lowest score dropped, making each of the two remaining exams worth 30 percent, plus the cumulative final worth 40 percent; and (3) three exams, each worth 20 percent, plus an optional cumulative replacement exam, offered at the end of the course, whose score could replace the lowest of the three exam scores, plus the regular cumulative final worth 40 percent. The study took place across multiple sections of the course, all taught by MacDermott, with the content, instructional methods, and test types held constant.
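To make the three weighting schemes concrete, here is a minimal sketch of how a course score would be computed under each policy. This is an illustration, not code from the study; the function names and sample scores are invented, and only the weights come from the description above.

```python
def standard(exams, final):
    """Scheme 1: three exams at 20 percent each plus a cumulative final at 40 percent."""
    return 0.20 * sum(exams) + 0.40 * final

def drop_lowest(exams, final):
    """Scheme 2: drop the lowest exam; the two remaining exams count 30 percent each."""
    kept = sorted(exams)[1:]  # discard the lowest of the three scores
    return 0.30 * sum(kept) + 0.40 * final

def replacement(exams, replacement_score, final):
    """Scheme 3: a cumulative replacement exam can stand in for the lowest
    exam score, but only if it helps; each exam stays at 20 percent."""
    adjusted = sorted(exams)
    adjusted[0] = max(adjusted[0], replacement_score)
    return 0.20 * sum(adjusted) + 0.40 * final

# A hypothetical student who stumbles on one exam (scores out of 100):
exams, final = [85, 90, 60], 88
print(standard(exams, final))         # 82.2
print(drop_lowest(exams, final))      # 87.7
print(replacement(exams, 80, final))  # 86.2
```

As the sample scores suggest, the drop policy can raise a course grade more than a merely decent replacement exam does, since the weak score disappears entirely rather than being averaged back in.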
The results are interesting. Dropping the lowest test score did not compromise performance on the cumulative final. “Contrary to previous research and conventional wisdom . . . allowing students to drop their lowest grade improved performance on a cumulative final exam, while offering a replacement test had no significant effects” (p. 364). These findings held even though there was some evidence of strategic test taking: some students who performed well on two of the tests had a significantly lower score on the third. MacDermott believes this did not depress final exam scores because the final was cumulative; students knew they would be tested on all the course content. The replacement exam, also cumulative, was formatted differently from the other tests in the course (multiple-choice questions versus short answers and problems), and MacDermott wonders whether that may have hindered student performance.
Because research on these practices is limited and they are used in many different iterations, generalizations from this work to other fields, content, and students are not in order. The analysis MacDermott employed is straightforward and could be replicated, and faculty who are using (or considering) one of these approaches should verify its effectiveness with evidence. As MacDermott points out, “The true question with each [approach] should regard the impact on student learning. While the student may appreciate the opportunity to drop their lowest grade, does it lead to behavior that may detract from learning?” (p. 365). In this study it did not, but is that true across the board? Is it true given other iterations of the approach? Does its impact depend on the content, the test question format, or the course level? These questions and others merit our attention.
Reference: MacDermott, R. J. (2013). The impact of assessment policy on learning: Replacement exams or grade dropping. Journal of Economic Education, 44(4), 364–371.