Is It Time to Rethink Our Exams?


I’ve been ruminating lately about tests and wondering if our thinking about them hasn’t gotten into something of a rut. We give exams for two reasons. First, we use exams to assess the degree to which students have mastered the content and skills of the course. But like students, we can get too focused on this grade-generating function of exams. We forget the second reason (or take it for granted): exams are learning events. Most students study for them, perhaps not as much or in the ways we might like, but before an exam most students are engaged with the content. Should we be doing more to increase the learning potential inherent in exam experiences?

We tend to see exams as isolated events, not learning experiences that can be enhanced by other activities within the course. I’m convinced that a well-structured exam review, one in which the students are doing the reviewing, not the teacher, can motivate test preparation, promote good exam study habits, and effectively integrate and add coherence to large chunks of course content. I believe we can structure exam debriefs to help students learn what they didn’t know or missed on the exam. We cannot accomplish that goal if teachers “go over” the most-missed questions. Students are the ones who made the mistakes. They need to correct them. We also can use debrief sessions to encourage examination of the strategies and approaches students used to prepare for the exam.

The types of exams we give are remarkably similar. For objective exams, we create multiple-choice questions, maybe some fill-ins, matching, or short answer, occasionally some true/false questions, or problems to solve. For subjective exams, we provide essay questions. For the most part, we give exams during a designated time frame, with no access to resources or expertise, and then teachers grade them. These features have prevailed for decades. Generally, we use the same exam formats within an individual course, within the courses that make up a program, and pretty much across various disciplines. I’m not saying that there’s anything wrong with these types of exams; I’m only noting how widely and consistently we use them.

Previous posts in this blog and the Teaching Professor newsletter have described an interesting array of alternatives—crib sheets, student-written exams, student-generated test questions, group exams, and two-stage testing (where students do the exam in class, submit it, and get a new copy that they complete before the next class, with the in-class test counting more than the take home). There are other options, but use of them continues to be the exception, not the rule.

The prevailing norm endorses challenging, difficult exams. Teachers want their exams to be hard to show students and others that the course has standards and rigor. But exams can be too hard. Difficulty has a point of diminishing returns—if students decide that intense study isn’t going to get them a decent grade, they stop trying. A pile of low exam scores usually indicates neither a good test nor lazy students. The challenge is finding that sweet spot where the test functions to differentiate those who’ve mastered the material from those who haven’t.

There are aspects of the typical exam scenario that tend to be rather artificial. How often are professionals required to demonstrate their knowledge within a discrete time period without access to resources or expertise? Yes, there are times when knowledge is needed immediately—the emergency room comes to mind. And there are times when there’s no access to resources or expertise—a windy day on a lake with lots of rocks and a boat motor that has stopped running. So there are times, but the question is: how many?

I understand that grades need to measure how well an individual student has mastered the material, and that does justify how we administer exams. But I’m not convinced that the details a student happens to have in his or her head at the time of a test are as important as being able to find and assess information as it’s needed.

What are your thoughts? Do our testing assumptions and practices merit a revisit? Could they be doing a better job of promoting learning?

© Magna Publications. All rights reserved.
