Multiple-choice questions get a bad rap, and it is easy to see why. Most do not assess higher-order thinking—you either know the answer or you don’t. And if you do know the correct choice, it does not reveal depth of knowledge. So how can we construct them to better assess learning?

Let’s consider two versions of a question taken from a large survey course I teach to undergraduate nonmajors on the biological bases of behavior. Both are meant to assess how well students can identify scientific tools for studying different research questions.

Which of these provides the best spatial resolution?

A. fMRI (functional magnetic resonance imaging)
B. PET (positron emission tomography)
C. CAT (computed axial tomography)
D. EEG (electroencephalogram) 

The correct answer is (A) fMRI. How is this knowledge valuable? The question does not assess that. Here is another version, with the same correct answer and choices.

Dr. Pons investigates how the brain perceives human faces. In an upcoming experiment, she wants to find out whether perceiving a male face, compared to a female face, is (1) faster and (2) handled by a different part of the brain. Which of the following tools would BEST answer Dr. Pons’ research questions?

This version of the question requires a student to evaluate the different tools to answer it correctly. Here are six guidelines that have helped me make the most of my multiple-choice questions.

Guideline 1: Make sure each incorrect option is plausible but clearly incorrect.

Do not waste time and space with implausible options. For example, “flashlight” would be implausible, and any question it prompts from students is likely off topic.

All the choices listed are plausible: our readings reviewed notable experiments using each one, and the deliberation period is a chance to review them again. The incorrect options are also clearly incorrect: EEG is not used to study regional differences, CAT does not measure function, and PET does not monitor speed well.

Guideline 2: Make sure that the correct answer is clearly correct.

To answer her questions, Dr. Pons needs a tool that can report on both activation speed and regional differences. Only option (A) fMRI can do both, making this the one clear correct answer. Avoid “Select all that apply” or options such as “A and B” or “A and C.” Make only one answer correct.

Guideline 3: Avoid, if possible, using “All of the above.” 

In my example, I want to know whether students understand how the tools differ. If they are unsure of any one of them, this exercise would identify questions about those specific tools. Suppose “E. All of the above” were a choice and a student selected it. Because it is incorrect, I would know that they needed help, but not with what; I would be back to helping them identify their questions.

What if “All of the above” were the correct answer? Here is a version of the question for which that is true:

Dr. Pons investigates how the brain perceives human faces. Which of the following are tools she can use for her research?

This “All of the above” correct answer fails to assess whether students can identify tools appropriate for different research questions. A student may be able to, but their correct answer masks their level of knowledge. The same applies to the next guideline. 

Guideline 4: Use “None of the above” with caution.

What if “None of the above” were the correct answer? Here is a version of the question for which that is true:

Dr. Pons investigates how the brain perceives human faces. Which of the following cannot be used for her research?

A student’s uncertainty about, or understanding of, the individual tools would not be evident. Suppose a student correctly chose “None of the above.” As in Guideline 3, their level of knowledge is masked.

Guideline 5: Try to keep options of similar lengths.

Test-wise students will pick the longest option if they are unsure of the answer—the logic being that it is “too long to be wrong.” Imagine that in our example, the choices were as follows:

A. 7-Tesla functional magnetic resonance imaging
B. PET  
C. CAT  
D. EEG  

If a student were uncertain about the other options, the greater detail could sway them to think option (A) has more merit. Following this logic, I would also avoid having one oddly short option.

If this is not feasible, make two options of similarly odd lengths: two short or two long. We want a student to choose or avoid an option, correct or incorrect, because of its content, not its length.

Guideline 6: Have students team up in small groups to make a choice together.  

Students who have a question might not share it in class lest they come across as “dumb.” But in small groups, minus the teacher, they may be less intimidated. Maybe another student will answer their question. If not, it means other students also do not know the answer or share the same question. Now it has become a good question to ask the teacher together.

The small group can also help the student who is perplexed but does not yet have the words to pinpoint their confusion. Here, there is an opportunity for discussion and exchange of ideas. These conversations can help students identify the words to formulate their question.

While students can enhance one another’s learning, they can also dwell on the implausible. The multiple-choice question format provides structure that focuses the discussion and saves time, ensuring your students efficiently discuss the specific items you have determined are most valuable.



Minna Ng, PhD, is assistant professor of the practice of psychology and neuroscience at Duke University.