Making Multiple-Choice Exams Better

The relatively new Scholarship of Teaching and Learning in Psychology journal has a great feature called a “Teacher-Ready Research Review.” The examples I’ve read so far are well organized, clearly written, full of practical implications, and well referenced. This one on multiple-choice (m/c) tests (mostly the questions on those tests) is no exception. Given our strong reliance on this test type, a regular review of common practices in light of research is warranted.

This 12-page review covers every aspect of m/c exams, at least every aspect I could think of. What follows here are bits and pieces culled from the review. For teachers serious about ensuring that their m/c exams (both those administered in class and online) assess student knowledge in the best possible ways, this article should be kept on file or in the cloud.

Perhaps the most important ongoing concern about m/c tests is their propensity to measure surface knowledge, those facts and details that can be memorized without much (or any) understanding of their meaning or significance. This article documents studies showing that students’ preference for m/c exams derives from their perception that these exams are easier. Moreover, that perception leads students to use study strategies associated with superficial learning: flashcards with a term on one side and the definition on the back, reviewing notes by recopying them, and so on. Students also prefer m/c tests because they allow guessing. If there are four answer options and two of them can be ruled out, there’s a 50 percent chance the student will get the answer right. So students get credit for answers they didn’t know, leaving the teacher to wonder how many right answers signal knowledge and understanding the student doesn’t actually have.
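To see how much guessing can inflate a score, here’s a quick back-of-the-envelope sketch; the exam size and per-student numbers are my assumptions for illustration, not figures from the article:

```python
# Back-of-the-envelope: how guessing inflates m/c scores.
# All numbers here are illustrative assumptions, not from the article.

def expected_score(n_questions, n_known, options_left=2):
    """Expected number correct when every unknown item is guessed
    among `options_left` still-plausible answer options."""
    n_guessed = n_questions - n_known
    return n_known + n_guessed / options_left

# A student who truly knows 30 of 50 items but can rule out two of the
# four options on the rest averages 40 correct: an 80% exam score built
# on 60% actual knowledge.
print(expected_score(50, 30))  # 40.0
```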

In one of the article’s best sections, the authors share a number of strategies teachers can use to make m/c questions more about thinking and less about memorizing. They start with the simplest. If the directions spell out that students should select “the best answer,” “the main reason,” or the “most likely” solution, then some of the answer options can be correct, just not as correct as the right answer, and questions like that require more and deeper thinking.

Another strategy that promotes more thinking has students indicate, along with each answer, how confident they are that it’s correct. The greater their certainty that the answer is right, the higher the confidence level. When the exam is scored, a right answer given with high confidence is worth more than a right answer given with low confidence.
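Here’s a minimal sketch of how such a rubric might be scored; the point values are my assumption, since the review describes the approach without prescribing specific weights:

```python
# A sketch of confidence-weighted scoring. The point values are
# hypothetical; the review describes the idea but not a rubric.

def score_item(is_correct, confidence):
    """Award more points when a right answer comes with high confidence.
    `confidence` is one of 'high', 'medium', or 'low'."""
    points = {"high": 3, "medium": 2, "low": 1}
    return points[confidence] if is_correct else 0

print(score_item(True, "high"))   # 3
print(score_item(True, "low"))    # 1
print(score_item(False, "high"))  # 0
```

(Some confidence-weighting schemes also penalize confidently wrong answers; the version described in the review only rewards confident right ones.)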

Perhaps the most interesting way to make students think more deeply doesn’t use questions per se. Instead, students read several sentences presented as a short essay. Some of the information in the essay is correct, and some of it is incorrect, and students must identify the mistakes. The m/c options list different numbers of errors, so students must select the correct count.

There’s also some pretty damning evidence cited in the article: a lot of us don’t write very good m/c questions, especially those who write questions for test banks. In an analysis of almost 1,200 m/c questions from 16 undergraduate courses, the most common problem was “nonoptimal incorrect answers,” so designated if they were selected by fewer than 5 percent of the test takers. In other words, they’re obviously wrong answers that almost nobody picks, which shrinks the set of plausible options and means more correct answers are selected by guessing.
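That 5 percent criterion is easy to check against your own exam data. A minimal sketch, with made-up response data:

```python
# Flag "nonoptimal" distractors: incorrect options chosen by fewer than
# 5% of test takers. The response data below are made up for illustration.
from collections import Counter

def nonoptimal_distractors(responses, options, correct, threshold=0.05):
    """Return the incorrect options selected by fewer than `threshold`
    of all test takers."""
    counts = Counter(responses)
    n = len(responses)
    return [opt for opt in options
            if opt != correct and counts[opt] / n < threshold]

responses = ["A"] * 70 + ["B"] * 25 + ["C"] * 3 + ["D"] * 2
print(nonoptimal_distractors(responses, options="ABCD", correct="A"))
# ['C', 'D'] -- each chosen by under 5% of the 100 test takers
```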

Students want test questions to be fair. Research indicates that when students think tests are fair, they’re more likely to study, and they do, in fact, learn more as a result of their studying. What they mean by “fairness” is pretty straightforward. The questions need to be clear: students should be able to figure out what a question is asking even if they can’t answer it. There should be a broad range of questions: if the test covers three chapters, there should be content from all three chapters. The authors recommend giving students sample test questions so they know what to expect and can’t persuade themselves that memorizing a few definitions will be all it takes to conquer the test.

The article recommends several guides for writing good m/c questions. It’s good to remember that the easiest questions to write are those that focus on specific bits of information and have one right answer. Other details also merit consideration. For example, how many answer options should an m/c question have? Four? More? A fairly recent study looked at three, four, and five options in terms of test reliability, response behaviors, item difficulty, and item fit statistics and found no evidence of significant differences among the numbers of answer options. A meta-analysis of 27 studies (across different content areas and age levels) recommends three options. There’s an interesting practical reason that favors three options: on average, students can answer three-option questions five seconds faster than those with four or five options. That means more questions can be included on the exam. Imagine how popular that will be with students!
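The arithmetic is straightforward. Assuming a 50-minute exam and about a minute per four-option item (both assumptions mine, not the article’s), the five-second savings works out to a few extra questions:

```python
# Rough arithmetic on the five-second finding. The exam length and the
# per-item time are illustrative assumptions, not from the article.
exam_seconds = 50 * 60
sec_per_item_4opt = 60       # assumed average time per four-option item
sec_per_item_3opt = 60 - 5   # minus the roughly five-second savings

print(exam_seconds // sec_per_item_4opt)  # 50 items
print(exam_seconds // sec_per_item_3opt)  # 54 items
```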

“Consistent meaningful feedback (e.g., detailed explanation of why certain answers were correct or incorrect) is an important component of student learning outcomes, enjoyment, engagement in the course and rating of teaching quality” (p. 151), report the authors of another study referenced in the review. This argues for more than posting the right answers on the professor’s office door or on the course website. The authors recommend an interesting way of providing this high-quality feedback: giving students opportunities to self-correct. Research shows that students who were allowed to turn in a self-corrected midterm performed better on the final than students who weren’t given this option. Both the in-class (or online) exam and the self-corrected exam are scored. Students earn full credit if the answer on both exams is correct, partial credit if the question is missed on the in-class test but corrected on the take-home version, and no credit if it’s wrong on both tests.
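For clarity, here’s that scoring rule written out; the 1 / 0.5 / 0 weights are my assumption, since the review specifies full, partial, and no credit without exact values:

```python
# A sketch of the self-correction scoring rule described in the review.
# Point values are assumed; the article says only full / partial / none.

def self_correct_score(in_class_right, take_home_right):
    if in_class_right and take_home_right:
        return 1.0   # correct on both exams: full credit
    if not in_class_right and take_home_right:
        return 0.5   # missed in class but self-corrected: partial credit
    return 0.0       # wrong on both: no credit
```

(The review doesn’t address the odd case of an answer that’s right in class but changed to wrong on the take-home; the literal rule above gives it no credit.)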

As these highlights illustrate, this is an article packed full of good information. Students take tests seriously. We need to do our best to make those exams fair and accurate measures of learning.

Reference: Xu, X., Kauer, S., & Tupy, S. (2016). Multiple-choice questions: Tips for optimizing assessment in-seat and online. Scholarship of Teaching and Learning in Psychology, 2(2), 147–158.
