An Active Learning Exploration: Two-Stage Exams

Research on active learning is moving beyond the “does it work better than lecture?” question to explore how particular kinds of active learning experiences influence learning. How appropriate and welcome! Do some of its many techniques promote learning better than others? Which ones? And what kind of learning results from their use? The answers to those questions give teachers the information they need to make more informed choices.


For Those Who Teach from Maryellen Weimer

A recent “retrospective analysis” of content retention in a human physiology course illustrates the kind of comparative work that advances what we know about how to use active learning strategies effectively (Ford, 2019). The study looked at three years of data collected in three versions of the course: a flipped classroom with individual exams, a lecture-based course with individual exams, and a lecture-based course with two-stage exams. Each version covered the same content and used the same text, and unit exams were taken on the same days. Students selected the course option they preferred.

Students in the flipped course heard lectures in class, had recorded lecture material available for review outside of class, completed homework, and worked mini problems during class. In the two-stage exam option (lecture plus a group test activity), students first took each exam individually and then, during the next class period, took the same exam with three to five classmates. The individual exam counted for 90 percent of the grade and the group exam for 10 percent.
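For readers who want to see the arithmetic, the 90/10 split described above can be sketched in a few lines of Python; the scores used in the example are hypothetical, not from the study.

```python
def two_stage_score(individual: float, group: float) -> float:
    """Combine the two exam stages per the weighting above:
    the individual exam counts 90%, the group exam 10%."""
    return (90 * individual + 10 * group) / 100

# A student who scores 80 alone and 90 with the group
# earns a combined exam score of 81.
print(two_stage_score(80, 90))  # 81.0
```

Note how the small group-exam weight nudges the combined score upward only slightly, which matches Ford's observation (below) that grade inflation from the group stage was modest.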

The study is unique in that retention of the content was measured with a standardized exam developed by the Human Anatomy and Physiology Society (HAPS). It’s typically administered at the end of the sophomore year (as it was here) and is accepted within the field as a valid and reliable measure of learning outcomes. An extra credit scheme was used to incentivize students to take this additional exam.

Those who participated in the two-stage exam sections scored significantly higher on the HAPS test (58.72) than students in the lecture sections without group exams (54.29) and in the flipped courses (53.31). The researcher reports being surprised that students in the flipped sections did not outperform students in the lecture sections. Could that be because students determined the extent to which they used the out-of-class resources? Or maybe those out-of-class activities did not engage the students as deeply as did discussion of exam questions? Whatever the reason, this finding adds to other results documenting that “flipping” content acquisition to out-of-class activities doesn’t automatically improve learning.

Group exam experiences, which now go by a host of different names, continue to produce positive effects on exam scores, on course grades, and, in this case, on a standardized test. They do so because they involve strategies that have been shown to promote learning. The reference in this study is to the “testing effect,” which has consistently shown that when students are repeatedly tested on content, their long-term retention improves. There’s also evidence that student discussion of content—their attempts to explain it to each other, to defend and justify their answers, to raise questions and offer alternatives—leads to deeper understanding of the material. Students regularly report that collaborating with peers on exam questions reduces exam anxiety, and for some students, that’s a significant benefit.

A pragmatic detail worth noting: even when the group exam counts for a small portion of the total exam score (10 percent here), that’s still enough to motivate serious discussion of the exam questions and problems. Ford, who authored the article and taught some of these sections, notes that he’s used group exams for a number of years and in classes enrolling anywhere between 38 and 260 students.

What’s the disadvantage? Slightly inflated student exam scores, Ford notes. He goes on to point out that this inflation did not significantly affect grade distributions. Good. But the more salient point might be that students who participated in the group exams learned more and so earned those slightly inflated scores. Now, is it fair to give students a second opportunity to learn course content? I’m so on the side of learning that I can’t muster much motivation for that debate. Whether it takes two, three, or 10 times for students to grasp important material seems trivial when the alternative is not learning it at all.

Reference

Ford, D. (2019). A three-year retrospective analysis comparing student retention of human physiology concepts in flipped, lecture, and two-stage cooperative testing classrooms. Journal of College Science Teaching, 49(1), 59–63.