Course Evaluations: How Should We Improve Response Rates?

Shortly after 2000, higher education institutions began transitioning from paper-and-pencil student-rating forms to online systems. The online option has administrative efficiency and economics going for it, and at this point most course evaluations are conducted online. Online rating systems offer advantages not only for institutions but also for students: students can take as much (or as little) time as they wish to complete the form, their anonymity is better preserved, and several studies have reported an increase in the number of qualitative comments when evaluations move online. Other studies document that overall course ratings remain the same or improve slightly in the online format.

But not all the news is good. Response rates drop significantly when students complete the ratings online, from 70–80% for paper-and-pencil forms to 50–60% online. A 2008 review of nine comparison studies reported that online response rates averaged 23% lower than those of traditional formats. These low response rates raise the issue of generalizability: what percentage of students in a course need to respond for the results to be representative? The answer depends on a number of variables, most notably class size. For a class of 20 students, one expert puts the minimum at 58%; as class size increases, the percentage drops. Despite some disagreement over the exact percentages, there is consensus that online response rates should be higher than they are right now.
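To make the class-size point concrete, here is a minimal Python sketch based on the standard finite-population sample-size formula. This is a generic statistical illustration, not necessarily the calculation the expert cited above used; with its assumed 95% confidence level and 10% margin of error, it won't reproduce the 58% figure exactly, but it does show why the required percentage falls as classes get larger.

```python
import math

def required_response_rate(class_size, z=1.96, margin=0.10, p=0.5):
    """Minimum response rate for a representative sample of a class.

    Uses the standard finite-population sample-size formula with assumed
    defaults (95% confidence, 10% margin of error, p = 0.5); the expert
    cited in the post may have used different assumptions.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / class_size)        # finite-population correction
    return math.ceil(n) / class_size

for size in (20, 50, 100, 500):
    print(f"class of {size:>3}: at least {required_response_rate(size):.0%} must respond")
```

Under these assumptions, a class of 20 needs roughly 85% participation, a class of 100 about 50%, and a class of 500 under 20%, which is the pattern behind the claim that small classes demand much higher response rates.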

Goodman, Anson, and Belcheir surveyed 678 faculty across a range of disciplines, asking them to report how they were trying to boost online response rates. Among those surveyed, 13% reported doing nothing to improve the rates; on average, 50% of their students completed the forms. Those who did something to encourage students to complete the evaluations generated response rates of 63%. The most common approaches faculty reported were the ones we'd expect: reminding students to complete the forms, which upped the response rate to 61%, and explaining how the results help instructors improve, which bumped the rate to 57%. But what improved response rates the most, by roughly 22%, was providing students with incentives.

The incentives were grouped as point-based or non-point-based and as individual or class-wide. Points were by far the most common incentive, used by 75% of the faculty who reported offering incentives; points given for completing the evaluation ranged from 0.25% to 1% of the total grade. The most common class-based incentive involved setting a target response rate (say, 80%) and then rewarding the class if the target was reached. For example, students could use a crib card during the final (a non-point-based reward) or receive a designated number of bonus points. In an earlier blog post, I described an institutional incentive in which students who completed course evaluations got early access to their grades.

This study of incentives explored other interesting issues (which makes it worth reading), but I want to use the rest of today's post to focus on the use of incentives to increase response rates. I can understand why faculty do it. Ratings are part of promotion and tenure processes, and they affect adjunct employment and sometimes merit raises too, so I'm not interested in moral judgments about individuals who have decided to do so.

But regardless of what we do in our courses, all of us need to think about the implications of the practice. What messages do we convey to students when we offer incentives to complete course evaluations? Does it change the value of the feedback? We also should consider why response rates are so low. Is it because once students reach the end of the course, they just want to be done and aren’t really interested in helping to improve the course for students who will take it after them? Or have they grown tired of all these course evaluations and don’t think their feedback makes any difference anyway?

Perhaps we can all agree that offering incentives to complete the evaluations doesn't get students doing ratings for the right reasons. Students should offer assessments because their instructors benefit from student feedback the same way students learn from teacher feedback. They should be doing ratings because reflecting on courses and teachers enables students to better understand themselves as learners. They should be doing these end-of-course evaluations because they believe the quality of their experiences in courses matters to the institution.

The bottom-line question: Is there any way to get students to do ratings for the right reasons? Please, let's have a conversation about this.

Reference: Goodman, J., Anson, R., & Belcheir, M. (2015). The effect of incentives and other instructor-driven strategies to increase online student evaluation response rates. Assessment & Evaluation in Higher Education, 40(7), 958–970.

© Magna Publications. All rights reserved.
