Multiple Choice Exams: An Alternative Structure

The major objection to multiple-choice exams is that they encourage memorization and discourage thinking. It’s a valid objection when the questions test lower levels of learning, such as recall. Unfortunately, various analyses of multiple-choice test questions have revealed that many of them do not test higher-order thinking abilities. Questions that do test higher-order thinking are difficult and time-consuming to write. But for many teachers, particularly those teaching multiple courses or large sections, multiple-choice tests are the only viable option, or at least that’s what many faculty think. Here’s an intriguing alternative that retains the efficiency of machine scoring but requires more student thinking and cleverly motivates students to do that additional mental work. An empirical analysis of the approach produced some impressive results as well.

Here’s a nutshell version of how it works (the article provides many more details). Students take the multiple-choice exam in class and turn in their answers on a machine-scorable form. They take the test questions home and have until the next class period to correct their answers. They are encouraged to consult the text and their notes and to talk to each other. What motivates their participation is the point scheme: students earn two points for every question answered correctly on both the original answer sheet and the self-corrected version, and one point for every answer that was wrong on the original but corrected on the take-home version. “Students have to figure out whether they answered a question correctly or not for the self-revised version.” (p. 335)
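To make the point scheme concrete, here is a minimal scoring sketch in Python. It is not from the article; the function name, the data layout, and the decision to award no points when an originally correct answer is changed to a wrong one (a case the description doesn’t address) are assumptions for illustration only.

```python
def self_correcting_score(original_answers, revised_answers, answer_key):
    """Score one exam under the self-correcting scheme described above.

    2 points: correct on the in-class answer sheet and on the revision.
    1 point:  wrong in class but corrected on the take-home version.
    0 points: wrong on both (and, as an assumption here, a right answer
              changed to a wrong one on the revision).
    """
    score = 0
    for original, revised, key in zip(original_answers, revised_answers, answer_key):
        if original == key and revised == key:
            score += 2  # answered correctly in class and kept correct
        elif original != key and revised == key:
            score += 1  # corrected on the take-home version
    return score

# Hypothetical five-question exam with answer key "ABCDA"
key      = list("ABCDA")
in_class = list("ABBDC")   # questions 3 and 5 missed in class
revised  = list("ABCDC")   # question 3 corrected at home; question 5 still wrong
print(self_correcting_score(in_class, revised, key))  # 2 + 2 + 1 + 2 + 0 = 7 of 10
```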

“The idea behind self-correcting exams is that the additional interaction with the material fosters deeper learning. Students are challenged to discover the correct answer, to study the material in their way, and to experience some degree of mastery.” (p. 335)

The effects of this particular approach were studied in two sections of a large (about 175 students per section) developmental psychology course. In the control section, students were not given the self-correcting option for any of the three exams or the cumulative final. In the experimental section, students had the option of doing a self-correcting version for exams one and two but not for exam three or the final. All students in the experimental section took the instructor up on the option and completed a revised version of both exams.

When researchers looked at the exam performance of the two groups, they found that “compared to the control sample, students who got the self-correcting option improved more over the course of the semester on the three original exams.” (p. 337) Students at every grade level (A through F) improved their scores with the self-correcting option, but low-performing students benefited the most. And students who self-corrected their exams did better on the final than those in the control group. More compelling, according to the authors, were the retention benefits of the approach: the more items students corrected on the exams, the better they performed on the corresponding part of the final. They conclude, “The self-correcting approach may be particularly useful for large classrooms that limit instructors’ options to foster active learning. The self-correcting approach provides a relatively cost-efficient and simple way of implementing an active learning component.” (p. 338)

But is this an approach that fosters cheating? It depends on how cheating is defined. The authors contend that studying the text and discussing answers with peers are actions that engage students in ways that promote learning and should not be considered cheating. Obviously, if students were simply copying each other’s answers, that would be cheating, but the hedge here is that students don’t know whether they’ve answered the questions correctly. The authors write, “We believe that our exams were difficult enough to create doubts that peers actually had the correct answers.” (p. 338) They point out that exams for the self-correcting approach should be on the difficult side, which also resolves the objection that the approach promotes grade inflation. And they note that if students were cheating by copying answers, their better scores on the final exam would be difficult to explain.

Sometimes teachers get stuck in their thinking about what’s possible. An option like this shows that there are alternatives, and even if an instructor doesn’t opt for this approach, it opens the door to other ways of designing multiple-choice exam experiences.

Reference: Gruhn, D., & Cheng, Y. (2014). A self-correcting approach to multiple-choice exams improves students' learning. Teaching of Psychology, 41(4), 335–339.
