Are students taking their end-of-course evaluation responsibilities seriously? Many institutions ask them to evaluate every course and to do so at a time when they’re busy with final assignments and stressed about upcoming exams. Response rates have also fallen at many places that now have students provide their feedback online. And who hasn’t gotten one or two undeserved low ratings—say, on a question about instructor availability when the instructor regularly came early to class, never missed a class, and faithfully kept office hours? Are students even reading the questions?
There’s some comfort to be found in a survey of almost 600 students enrolled in a wide range of degree programs at four different kinds of institutions. “We found that the majority of students generally held positive views about their role in the evaluation process and that they reported taking the evaluation process seriously” (p. 311). A bit over 66 percent agreed or strongly agreed that student evaluations of teachers were useful, and 95 percent indicated that they often or very often honestly assessed the instructor’s teaching ability. Were the students simply giving answers they deemed socially desirable? The researchers tested a subset of the larger sample for that possibility and found evidence that students were not giving what they believed to be socially appropriate responses. Still, the positive views were not universal: over 16 percent of the students did not think the ratings were useful, and over 18 percent considered them a waste of time.
Students in this cohort were not clear about how their ratings were used. Just under 14 percent agreed or strongly agreed that the ratings had an effect on professors’ salaries, and just under 30 percent thought they were used in tenure decisions. Relatedly, less than half the students reported writing comments in response to open-ended questions, probably because only 43 percent believed their comments were often or very often read by professors. Most faculty read those comments religiously, often finding them more useful than the numerical scores. An occasional mention of what was learned from the ratings and what changed as a result can let students know that their comments are read, considered, and sometimes acted on. That message is reinforced when student feedback is collected midcourse and the results are discussed with the class.
The sample included students enrolled in 107 different majors, which the researchers grouped into seven categories. They found “scant” evidence that student views of the rating process varied by major, and views were also unrelated to the kind of institution students attended. Nor did views differ by class standing or gender.
The survey also contained an open-ended query: “Describe your general perceptions and beliefs about the course evaluations you complete each semester about your professors.” It generated a wide range of responses, including fairly regular mentions of the amount of class time students are given to complete the evaluation. When the surveys were passed out at the end of the period, students reported feeling rushed to complete them. The researchers note that not giving students enough time sends a message about how much the activity is valued.
These data do not rule out the possibility that some students take the evaluation process less seriously than it deserves. But according to these survey results, those students are not in the majority.
Reference: Kite, M. E., Subedi, P. C., & Bryant-Lees, K. B. (2015). Students’ perceptions of the teaching evaluation process. Teaching of Psychology, 42(4), 307–314.