The Testing Effect and Regular Quizzes

The “testing effect,” as cognitive psychologists call it, seems pretty obvious to faculty. If students are going to be tested on material, they will learn it better and retain it longer than if they just study it. And just in case you had any doubts, plenty of evidence collected in labs and simulated classrooms verifies that the testing effect exists. But as with much of the research done in cognitive psychology, it has not been studied much in actual classrooms, and, of specific interest here, in college classrooms. When it has been studied in college classrooms, the results aren’t as consistent as might be expected, but then the study designs aren’t all that similar.

The use of quizzes offers a good arena in which to study the testing effect. Students are regularly tested on course material, and that repeated testing should improve their exam and final scores. However, design details may influence the outcome. How many quizzes would students need to take to gain the testing effect benefit? Does it matter if the quizzes are announced or if they’re pop quizzes? Should the quizzes be graded or ungraded? If graded, does it matter how much they count? Is the testing benefit present if the quiz questions come from material covered in class? What if the quiz questions come from assigned reading before that material is covered in class? Does the testing effect apply to certain kinds of questions but not others—say, test questions that are the same as the quiz questions, or similar to the quiz questions, or totally new questions?

What we really need here is a set of best practices—those design details that most reliably achieve the desired results. The caveat, of course, is that in the teaching and learning realm, best practices are simply the ones that usually work best. With different student cohorts learning different content from different teachers at different kinds of institutions, there are too many variables to expect consistent results. Best practices have value in that they offer a place to start.

A recent study of quizzing in introductory-level psychology courses explored some of these questions about the design details of a quiz strategy. In the control section, each class session had a designated topic and assigned reading pertaining to that topic. Some of the reading material was discussed in class, and some was not. The instructor regularly encouraged students to keep up with the reading.

In the experimental section, students had the same content schedule and reading assignments, but they had a quiz every class session. The quizzes included two multiple-choice questions from content covered in the previous session and three questions from assigned reading not covered in class. The quizzes were graded and counted for 25 percent of the final course grade.

Both sections took three exams, and each of those exams included 15 questions from the assigned readings (plus other questions unique to each class). Some of those questions were the same questions used on the quiz, some were similar, and some were entirely new questions.

In the quiz section, “scores were significantly higher than the control class” (Batsell et al. 2017, 21), and they were higher on all three types of questions. A survey of students in the quiz section also revealed that anticipating daily quizzes helped them study more, encouraged them to read more, reduced the amount of cramming, and prompted them to change their study habits.

Another study referenced in this research found a testing effect for ungraded quizzes but not for graded pop quizzes. These researchers wonder whether the predictability of a quiz every class session reduces the anxiety of always wondering whether today is going to be a quiz day.

This research doesn’t answer all of the quiz design questions, but it does address some of them. And although these answers may not be definitive, they illustrate how the details of an instructional approach, such as using quizzes, can be explored empirically. Cognitive psychology has validated the testing effect. Classroom research like this begins to identify the details that make it work reliably in actual teaching situations.

Reference: Batsell, W. R., Jr., J. L. Perry, E. Hanley, and A. B. Hostetter. 2017. Ecological validity of the testing effect: The use of daily quizzes in introductory psychology. Teaching of Psychology 44(1): 18–23.
