We regularly get course evaluation results, and they aren’t the kind of feedback most of us want. At least, that’s what a recent survey showed. In questionnaire responses from almost 350 biology faculty members at 185 different institutions, 41 percent reported being dissatisfied with end-of-course evaluations, and 46 percent said they were satisfied with them only “in some ways.” The reasons given for the dissatisfaction were many: the evaluations didn’t provide constructive feedback; response rates were poor; the evaluation questions didn’t align with the instructor’s objectives; the focus was on student satisfaction rather than learning; and the process wasn’t designed to genuinely engage students in providing useful, insightful feedback. It did not matter where these respondents taught; even those at institutions where teaching was ostensibly valued were not satisfied with course evaluation feedback. Nor did it matter what teaching practices they reported using: those who lectured were just as unhappy with course evaluation processes as those who used active learning.
Almost 70 percent of these biology faculty received peer feedback, mostly from classroom observations, and respondents valued that feedback more highly than the input received from students. Even so, as the research team notes, peer observations are not “without their share of problems” (9). They aren’t conducted uniformly; only half the respondents reported the use of a form or feedback template to guide the observer. When the review is part of a promotion and tenure requirement, faculty responses suggest that the observations “may be a rubber stamp rather than a real opportunity for critical feedback” (9). And, as studies of peer review have confirmed across the years, peer assessments tend to be more positive than student evaluations.
Survey responses also described the kinds of comments peer observers typically offered. Most frequently these concerned rapport and interaction with students and “lecture-related behaviors” such as clarity of explanations, organization, speaking style, content, and demeanor (6). Far fewer colleague comments addressed time management of the class session, learning objectives and class goals, the effectiveness of class activities, or the quality of assignments.
As for what kind of feedback these faculty wanted, they “continued to select both students and peers as valuable sources of feedback, and this was true regardless of institution type” (8). In place of the usual end-of-course ratings, faculty identified novel strategies such as midcourse evaluations, data about student learning, and alumni evaluations. They also wanted to select their peer reviewers, opting for colleagues with experience, those recognized as excellent teachers, those teaching similar courses, and those with knowledge of evidence-based teaching strategies and educational research.
In their discussion of the results, the researchers explored the disconnect between current interest in active learning and evidence-based strategies and the didactic orientation of both course evaluation forms and peer feedback. Forms that ask questions about lecture skills create the expectation that good teachers are supposed to lecture rather than use activities that directly involve students in learning processes. Peers, too, regularly offer comments on presentation skills while rarely mentioning learning or assessment.
One of the questions underlying this research is the role of feedback in improving instruction. In this case, the researchers were interested in feedback that moves faculty toward evidence-based practices. “To support instructional change, faculty clearly need more than just knowledge of effective teaching strategies. They also need motivation, support, critical reflection and concrete suggestions for improvement” (10). That kind of feedback can come from students and peers, but under current practices it does not. The 1980s and early 1990s saw a plethora of studies exploring student ratings and the widespread adoption of course evaluation procedures; the use of both has remained largely unchanged since then. Given the dissatisfaction with both, coupled with a continuing belief in their potential to improve instruction, this research makes it clear that the time for change has come.
The researchers point out that this was a survey of biology faculty, though they expect faculty in other STEM fields would react the same way. Might we assume that faculty across the board would offer much the same assessments?
Reference: Brickman, P., Gormally, C., and Martella, A. M. 2016. “Making the Grade: Using Instructional Feedback and Evaluation to Inspire Evidence-Based Teaching.” Cell Biology Education (Winter): 1–14.