Calculating Final Course Grades: What about Dropping Scores or Offering a Replacement?

Instructors commonly cope with a missed test or failed exam (this may also apply to quizzes) by letting students drop their lowest score. Sometimes the lowest score is replaced by an extra exam or quiz. Sometimes the tests are worth different amounts, with the first test worth less, the second worth a bit more, and the third worth more than the first two—but not as much as the final.

There are various advantages and disadvantages to these approaches. Dropping the lowest score means no or fewer make-up exams or quizzes, which is a good thing for the teacher. It also makes it possible for students to do poorly on one assessment and still do well in the course. However, the material on that dropped exam or failed quiz is lost, because the student never has to learn it. The replacement test has the advantage of holding the student responsible for all the content in the course, and replacement tests offered at the end of the course can be excellent preparation for the final. However, the teacher then has to construct another test. Progressively weighting the tests gives students the opportunity to “learn” how the professor tests. For students who assume the course content is a breeze, the first exam can serve as a wake-up call, and if it counts less, there is still time to do well in the course.

But are we focusing on the question we should be asking about these alternatives? Raymond J. MacDermott suggests that we aren’t: “The true question with each should regard the impact on student learning” (p. 365). How do these alternatives affect what students learn in the course? It’s a straightforward, obvious question, yet it’s not one frequently addressed in discussions of these alternatives, and it hasn’t been explored much empirically.

In a small study conducted in three sections of intermediate macroeconomic theory, MacDermott compared three assessment policies in terms of their impact on cumulative final exam scores: 1) three in-class exams, each worth 20 percent of the grade; 2) three in-class exams with the lowest score dropped and the remaining two each worth 30 percent of the grade; and 3) three in-class exams (each worth 20 percent), plus an optional end-of-course exam whose score could replace the lowest of the three in-class exam scores.
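For concreteness, here is a minimal sketch, in Python, of how the exam portion of a course grade could be computed under each of these three policies. The 20 and 30 percent weights come from the article; the function names, the example scores, and the assumption that the replacement score counts only when it beats the lowest in-class score are illustrative assumptions, not details from the study.

```python
# Illustrative sketch only: the 20%/30% exam weights follow MacDermott's
# description of the three course sections; everything else here (names,
# example scores, replacement handling) is assumed for demonstration.

def policy_equal_weights(exams):
    """Policy 1: three in-class exams, each worth 20 percent of the course grade."""
    return sum(0.20 * score for score in exams)

def policy_drop_lowest(exams):
    """Policy 2: drop the lowest exam; the remaining two count 30 percent each."""
    kept = sorted(exams)[1:]  # discard the lowest of the three scores
    return sum(0.30 * score for score in kept)

def policy_replacement(exams, replacement=None):
    """Policy 3: each exam worth 20 percent; an optional end-of-course exam
    can replace the lowest in-class score if it is higher."""
    scores = list(exams)
    if replacement is not None:
        lowest = min(range(len(scores)), key=lambda i: scores[i])
        scores[lowest] = max(scores[lowest], replacement)
    return sum(0.20 * score for score in scores)

# Example: a student who does poorly on the second exam (scores out of 100).
exams = [82, 55, 78]
print(policy_equal_weights(exams))    # 43.0 points toward the course grade
print(policy_drop_lowest(exams))      # 48.0 points
print(policy_replacement(exams, 85))  # 49.0 points
```

As the example scores suggest, both the drop and the replacement policies let a student write off one exam while losing little or nothing in the grade calculation, which is exactly the kind of strategic behavior the study looks at.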

Students in the section that could drop an exam “engaged in some form of strategic test taking” (p. 366): they under-studied for one of the exams or skipped it entirely. However, this did not hurt their performance on the final. In fact, “allowing students to drop their lowest exam score actually led to better performance on the cumulative final exam” (p. 368). The opportunity to take a replacement exam did not improve final exam performance for the students who took that extra exam.

The results of this study are at odds with previous research. Findings from a principles of microeconomics course cited in the study showed that dropping a test grade negatively affected scores on the cumulative final. The value of this work lies not so much in the results themselves as in the important questions it raises about exam policies. Yes, the convenience of the student and the instructor does matter, but is it as important as the learning objectives of the course? Shouldn’t our assessment policies be the ones that promote the most learning for students? And shouldn’t we analyze the impact of the policies we use with collected evidence?

Reference: MacDermott, R. J. (2013). The impact of assessment policy on learning: Replacement exams or grade dropping. Journal of Economic Education, 44(4), 364–371.
