Does the Strategy Work? A Look at Exam Wrappers


For many faculty, adding a new teaching strategy to our repertoire goes something like this. We hear about an approach or technique that sounds like a good idea. It addresses a specific instructional challenge or issue we’re having. It’s a unique fix, something new, a bit different, and best of all, it sounds workable. We can imagine ourselves doing it.

Let’s consider how this works with an example. Have you heard of exam wrappers? When exams are returned, they come with a “wrapper” on which students are asked to reflect, usually on three areas related to exam performance:

  • what study skills they used to prepare;
  • what types of mistakes they made on the exam; and
  • what modifications might improve their performance on the next test.

At a strategic moment, this technique confronts students with how they prepared and performed with an eye on the exams still to come. It’s an approach with the potential to develop metacognition—that reflective analysis and resultant insight needed to understand, in this case, what actions it takes to learn content and perform well on exams.

But is there any evidence that exam wrappers improve performance and promote metacognitive insights? For a lot of instructional strategies, we still rely on the instructor’s opinion. In the case of exam wrappers, however, we do have evidence—just not a lot, and the results are mixed. In the most recent study, which used a robust design, they didn’t work. Researchers Soicher and Gurung found that the wrappers didn’t change exam scores, final course grades, or scores on the Metacognitive Awareness Inventory, an empirically developed instrument that measures metacognition. Examples where exam wrappers did have positive outcomes are referenced in the study.

What instructors most want to know about any strategy is whether it works. Does it do what it’s supposed to do? We’d like the answer to be clear cut. But in the case of exam wrappers, the evidence doesn’t tell us whether they’re a good idea or not. That’s frustrating, but it’s also a great example of how conflicting results lead to better questions—the ones likely to move us from a superficial to a deeper understanding of how different instructional strategies “work.”

What could explain the mixed results for exam wrappers? Does the desired outcome depend on whether students understand what they’re being asked to do? Students are used to doing what teachers tell them, pretty much without asking themselves or the teacher questions. As these researchers note, maybe students don’t “recognize the value or benefit of metacognitive skills” that exam wrappers are intended to develop (p. 69).

Is effectiveness a function of how many exam wrappers a student completes? Would they be more effective if they were used in more than one course? Maybe the issue is what students write on the wrapper. Would it help if the teacher provided some feedback on what students write? In other words, the effectiveness of exam wrappers could be related to a set of activities that accompany them. Maybe they don’t work well if they’re just a stand-alone activity.

It could also be that students are doing exactly what we ask of them. For example, they see that they’re missing questions from the reading, and so they write that they need to keep up with the reading and not wait until the night before the exam to start doing it. But despite these accurate assessments, they still don’t keep up with the reading. Students have been known to abandon ineffective study routines only reluctantly, even after repeated failure experiences.

There’s a lot more complexity than meets the eye with almost every instructional strategy. We’d love for them to be sure-fire fixes, supported with evidence and delivering predictably reliable outcomes. Unfortunately, how instructional strategies affect learning is anything but simple, and our thinking about them needs to reflect this complexity.

We could conclude that with mixed results and instructional contexts so variable, there’s no reason to look at the research or consider the evidence. Wrong! The value of these systematic explorations lies not so much in the findings as in their identification of details with the potential to make a difference. So, you may start by thinking that exam wrappers are a cool idea, but that’s not all you’ve got. When someone else used them and took a systematic look at their effects on learning, certain sets of conditions and factors made a difference. That means you can use them having made some purposeful decisions about potentially relevant details.

Reference: Soicher, R. N., & Gurung, R. A. R. (2017). Do exam wrappers increase metacognition and performance? A single course intervention. Psychology Learning and Teaching, 16(1), 64–73.
