Grading and Feedback
It's a practice that's used by a number of faculty, across a range of disciplines, and in a variety of forms. Sometimes the lowest exam score is simply dropped. In other cases, there's a replacement exam, and the score on that exam can be used instead of any exam score that's lower. It's a strategy used with quizzes probably a bit more often than with exams.

The reasons for doing so include the perennial problem of students missing test events (sometimes for good reasons and other times for reasons not so good), and what teacher wants to adjudicate the reasons? It's also a practice that responds to test anxiety: some students find testing situations so stressful they aren't able to demonstrate their understanding of the material.

There are arguments against the strategy. Some contend that it's one of those practices that contributes to grade inflation. Others worry that knowing the lowest score will be dropped motivates strategic test taking: students don't study for a test or quiz, deciding beforehand that it will be their lowest score and they will drop or replace it subsequently. This means there's a chunk of material that they have learned less well. Another downside of a replacement or makeup exam is the extra time and energy involved in writing and grading it.

But what do we know about the practice empirically? Very little, it turns out. Raymond MacDermott, who has done several empirical explorations of the practice, found little related work in his review of the literature.
In this study he looked at the effects of three alternatives on cumulative final exam scores in a microeconomics course:

- three exams, each worth 20 percent of the grade, plus a cumulative final worth 40 percent of the grade;
- three exams with the lowest score dropped (making each remaining exam worth 30 percent of the grade), plus the cumulative final counting for 40 percent; and
- three exams, each worth 20 percent, plus a cumulative replacement exam, offered at the end of the course, whose score could replace the lowest score on one of the three exams, plus the regular cumulative final worth 40 percent of the grade.

The study took place across multiple sections of the course, all taught by MacDermott, with the content, instructional methods, and test types remaining the same.

The results are interesting. Dropping the lowest test score did not compromise performance on the cumulative final. "Contrary to previous research and conventional wisdom . . . allowing students to drop their lowest grade improved performance on a cumulative final exam, while offering a replacement test had no significant effects." (p. 364) These findings resulted even though there was some evidence of strategic test taking: some students who performed well on two of the tests had a significantly lower score on the third one. MacDermott believes that did not impact the final exam score because the final was cumulative, and students knew they would be tested on all the course content. The replacement exam, also cumulative, was formatted differently than other tests in the course (multiple-choice questions versus short answers and problems), and MacDermott wonders whether that may have hindered student performance.

Because research on these practices is limited and they are used in many different iterations, generalizations from this work to other fields, content, and students are not in order. The analysis MacDermott employed is straightforward and could be replicated.
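The arithmetic behind the three weighting schemes is easy to illustrate. Here is a minimal Python sketch; the function name and the sample scores are hypothetical, but the weights (20/30/40 percent) are those described in the study:

```python
def course_grade(exams, final, scheme, replacement=None):
    """Course grade (0-100) under the three weighting schemes in the study.

    exams       -- list of three exam scores (0-100)
    final       -- cumulative final exam score (0-100)
    scheme      -- 'baseline', 'drop_lowest', or 'replacement'
    replacement -- score on the optional cumulative replacement exam

    Function name and sample values are illustrative, not from the article.
    """
    if scheme == 'baseline':
        # Three exams at 20% each, cumulative final at 40%.
        return 0.20 * sum(exams) + 0.40 * final
    if scheme == 'drop_lowest':
        # Drop the lowest exam; the remaining two count 30% each.
        kept = sorted(exams)[1:]
        return 0.30 * sum(kept) + 0.40 * final
    if scheme == 'replacement':
        # Replacement exam score may stand in for the lowest exam score,
        # but only if it is actually higher; weights stay 20/20/20/40.
        adjusted = list(exams)
        lowest = min(range(len(adjusted)), key=lambda i: adjusted[i])
        if replacement is not None and replacement > adjusted[lowest]:
            adjusted[lowest] = replacement
        return 0.20 * sum(adjusted) + 0.40 * final
    raise ValueError(f"unknown scheme: {scheme}")


# Example: the same hypothetical student under each scheme.
print(course_grade([80, 90, 70], 85, 'baseline'))
print(course_grade([80, 90, 70], 85, 'drop_lowest'))
print(course_grade([80, 90, 70], 85, 'replacement', replacement=88))
```

As the sketch makes visible, dropping the lowest score changes the weights on the remaining exams, whereas the replacement scheme keeps the weights fixed and only swaps in a better score.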
Faculty who are using (or considering using) one of these approaches should verify its effectiveness with evidence. As MacDermott points out, "The true question with each [approach] should regard the impact on student learning. While the student may appreciate the opportunity to drop their lowest grade, does it lead to behavior that may detract from learning?" (p. 365) In this study it did not, but is that true across the board? Is it true given other iterations of the approach? Does its impact depend on the content, test question format, or course level? These questions and others merit our attention.

Reference: MacDermott, R. J. (2013). The impact of assessment policy on learning: Replacement exams or grade dropping. Journal of Economic Education, 44(4), 364-371.