Student course evaluation data are being collected online for reasons that are difficult to argue against. The online administration process is standardized, it saves money (no paper costs), no class time is lost to collecting the data, results can be compiled and returned quickly and without transcription errors, and students can respond at a time that suits them. Even though declining (sometimes precipitously declining) response rates are a major concern, there's no talk of returning to paper-and-pencil, in-class, end-of-course rating procedures.
The decline in response rates once the ratings go online is happening pretty much across the board, at institutions large and small, teaching-focused and research-intensive. The problem, of course, is that when response rates are low, the data cannot be considered representative. There is no universal agreement on how low the response rate can go and still yield representative data. Recommendations are usually offered as a range, with the low end representing a liberal standard and the high end a stringent one. For a class of 30 students, the recommended low and high rates are 48 percent and 96 percent; for a class of 50, they are 35 percent and 93 percent.
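The article doesn't say how these thresholds were derived, but cutoffs like these typically come from the standard sample-size formula for estimating a proportion, adjusted with a finite-population correction because classes are small. The sketch below is a minimal illustration of that approach, assuming worst-case variance (p = 0.5) and common textbook confidence/margin pairs; since the parameters behind the quoted percentages aren't stated, its output won't exactly reproduce them.

```python
from math import ceil
from statistics import NormalDist


def required_response_rate(class_size: int, confidence: float, margin: float) -> float:
    """Fraction of a class that must respond for the ratings to estimate a
    proportion within +/- `margin` at the given confidence level, using the
    standard sample-size formula with a finite-population correction."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided critical value
    n0 = (z ** 2) * 0.25 / margin ** 2              # infinite-population size; p = 0.5 is worst case
    n = n0 / (1 + (n0 - 1) / class_size)            # finite-population correction
    return min(ceil(n), class_size) / class_size


# Illustrative thresholds only; these confidence/margin pairs are assumptions,
# not the (unstated) parameters behind the percentages quoted above.
for size in (30, 50):
    liberal = required_response_rate(size, confidence=0.80, margin=0.10)
    stringent = required_response_rate(size, confidence=0.95, margin=0.03)
    print(f"class of {size}: liberal ~ {liberal:.0%}, stringent ~ {stringent:.0%}")
```

The practical takeaway is the same one the quoted figures imply: the smaller the class, the larger the fraction of students who must respond before the ratings can be trusted.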
So, who's filling out the course evaluations? There has been concern among faculty that it's the disgruntled students, those who aren't happy with their grades, aren't interested in the content, and don't particularly like the instructor. A good bit of research documents that the opposite is true: it's the good students, those who are working hard and care about doing well in the course, who complete the evaluations. Either way, if a particular category of students is disproportionately completing the evaluations, the data are biased. That's a problem if the results are being considered in promotion and tenure decisions, contract renewals, or merit raises, and if individual faculty are making decisions about what to change based on the data.
The question that should be of concern at all levels within an institution is how to improve the response rates, and the work cited below addresses that question. Chapman and Joines (2017) asked 205 instructors with response rates of 70 percent or higher what strategies they used to encourage students to complete evaluations. The 120 who responded indicated which of 15 possible strategies they had used; they could select more than one and could add strategies not on the list.
Three strategies decisively topped the list: (1) talking about the importance of ClassEval [what end-of-course ratings are called at this research university] in class, which was selected by 87 percent of the cohort; (2) working to create a climate in class that reflects mutual respect between instructor and students, which was selected by 83 percent; and (3) telling students how student evaluation feedback is used to modify the course, which was selected by 78 percent. The next strategy on the list involved sending reminders to students, selected by 35 percent. Also of note, these instructors used multiple strategies, on average 4.3 different ones.
It's interesting, perhaps encouraging, that incentives were not high on the list for this faculty group. They were not dropping a low assignment grade if students completed the evaluation, not offering bonus points, and not bringing snacks to class. Goodman, Anson, and Belcheir (2015) report that incentives, where used, did increase response rates. But there are obvious ethical issues when students are enticed to provide feedback: whom does that practice motivate, and how does it influence the feedback they provide? The institution in the Chapman and Joines study has a policy against offering incentives. A student cannot be penalized for failing to complete the course evaluation, and instructors are not allowed to provide incentives like bonus points, extra credit, or dropped scores. Despite this policy, a few respondents did report using incentives: 13 percent awarded bonus points if students completed an evaluation, 4 percent dropped a low assignment grade, and 2 percent said they brought snacks to class in the hopes of raising response rates.
Incentives may work. Faculty give students points for all sorts of actions taken in and out of class, and students are known to be motivated to do things for points. But there are ethical considerations. The valuable outcome of this particular work is its documentation that strategies without those ethical complications also work. If teachers let students know that their feedback is respected and valued, and if they can point to ways the course has been changed based on student feedback, that can motivate students to complete online course evaluations. The message that feedback matters should be communicated throughout the course.
References:
Chapman, D. D., and Joines, J. A. (2017). Strategies for increasing response rates for online end-of-course evaluations. International Journal of Teaching and Learning in Higher Education, 29(1), 47–60.
Goodman, J., Anson, R., and Belcheir, M. (2015). The effect of incentives and other instructor-driven strategies to increase online student evaluation response rates. Assessment & Evaluation in Higher Education, 40(7), 958–970.