Students Can Write Good Exam Questions

I recently discovered a 2014 study that reported on student-generated multiple-choice questions. It was the results that really caught my attention: “We find that these first-year students are capable of producing very high quality questions and explanations” (Bates, Galloway, Riise, & Horner, 2014, p. 10). The research team reported that 75 percent of the student questions met these criteria: they were “clear, correct, require more than simple factual recall to answer, and possess a correct solution and plausible distractors” (p. 10). That’s impressive!

How did those teaching these courses accomplish such results? Two factors appear to have been key. First, writing multiple-choice questions was a graded assignment in two sequenced introductory physics courses. The assignment had students contribute one individually authored question, answer five other questions, and provide comments and ratings on three others. It was worth approximately 3 percent of the course grade each time students completed it, with the exact weight varying by section. The instructors used a free online tool, PeerWise, as the technology platform for the assignment.

Second, a 90-minute class session was devoted to preparing students to write the multiple-choice questions. Activities in that session included a “content-neutral” quiz from which students learned how multiple-choice questions are constructed (stems, distractors, etc.) and saw examples of poorly written questions. Another self-diagnostic tool helped students “explore their beliefs about thinking and guide them toward learning orientation and away from performance orientation” (p. 3). Students were challenged to write questions just beyond their level of understanding and were given a good, challenging sample question as an illustration. The session ended with students working in small groups to collectively generate a question following the guidelines provided. These questions were uploaded to seed the question pool.

The research team used three different assessment criteria to evaluate each of the more than 600 questions students generated across the four sections in the study. The evaluation involved analysis of the questions via Bloom’s taxonomy, a set of criteria for assessing the quality of the explanations, and consideration of each question’s overall quality. The article includes full details on each of these criteria.

For many of us who teach, the question will be whether there’s time to devote a course session to developing the skills students need to write good multiple-choice questions. The authors offer this justification: “Cognitively, it can be far more challenging to have to create an assessment activity, for example, a question complete with solution and explanation, than it is to simply answer one created by someone else. It can require higher order skills far above simply ‘remembering’ or ‘knowing’” (p. 1). They point out that most of us have discovered how challenging it is to write good test questions.

Those of us who’ve had students write potential test questions have learned not to expect very good ones. Students with little or no experience writing test questions quickly figure out that writing simple, factual-recall questions is a lot easier than creating ones that test higher-order thinking skills. Besides, students can’t help but be motivated to write the kind of questions they’d like to see on the exam. But this assignment, coupled with the in-class instruction, enables students to write questions through an activity that enhances the learning potential of the experience. Added motivation comes from knowing that their peers will answer their questions, rate them, and comment on them. All this combines to make high-quality questions a probable outcome.

As for taking time away from content, here’s another activity in which content is what’s used in the learning process. Students are learning how to write multiple-choice questions, but the questions use the course content. It’s hard to imagine that writing a multiple-choice question, writing an explanation, answering questions written by others, and evaluating the quality of those questions wouldn’t result in some pretty significant content learning. Beyond all that, the activity rests on research evidence that students’ asking and answering questions about the content promotes their learning of it.

One final thought: When we have students write potential exam questions, aren’t we really teaching them how to ask good questions, and isn’t that a central part of learning anything?

Reference

Bates, S. P., Galloway, R. K., Riise, J., & Horner, D. (2014). Assessing the quality of a student-generated question repository. Physical Review Special Topics—Physics Education Research, 10, 1–10. https://doi.org/10.1103/PhysRevSTPER.10.020105 [open access]

