OK, everyone. As I’ve mentioned a few times this semester, we’re going to spend 40 minutes today discussing bias in student evaluations.
You’ve already completed evals for our class (thanks!), but soon your other instructors will begin their end-of-semester campaigns. Incentivizing. Politely reminding. Cajoling. Then hounding.
Ever wonder why we’re so insistent on obtaining your feedback? [“Because you want to improve your courses?”] That’s right! Teaching is a tree-falls-in-the-forest kind of profession: if you’re not learning, then—by definition—we’re not teaching. We can prep, design, scaffold, reflect, demonstrate, and assess all we want, but without your firsthand accounts, our teaching remains largely theoretical. Like an engineer who designs roller coasters but never rides them. And because we care deeply about our disciplines and your growth as critical thinkers, compassionate human beings, and responsible citizens, we want to be good at our jobs. Your voices matter, and we take your feedback seriously.
Now, in the past, this was the moment when I’d move on to a different topic. And many believe that the conversation we’re about to have is inappropriate. Coercive. Out of bounds. Nevertheless, I’ve integrated a conversation about student evaluations of teaching (SETs) into my course content because I think it’s the right thing to do.
Why? You’ll complete evaluations nearly 40 times before you graduate, so I think it’s only right that we teach you how to complete them well. You deserve some context, a sense of the discourse that surrounds SETs. And that discourse is quite nuanced, complicated, and controversial. So controversial, in fact, that some institutions have done away with them altogether. So controversial that an arbitrator in Ontario recently ruled that student evaluations were too flawed to use in employment decisions like tenure and promotion.
The truth is, a lot of really smart people disagree about almost everything regarding SETs. Especially the degree to which they exhibit measurement bias (how well students can assess effective teaching) and equity bias (how much factors outside the teacher’s control affect students’ ratings). Research suggests that these biases may affect both the quantitative ratings and the qualitative comments that students give their instructors. In addition, there’s a phenomenon called the “online disinhibition effect,” in which anonymity emboldens individuals to do or say things that they’d never do or say in real life. A similar effect can lead to some pretty harmful comments on evals, especially from students who are disgruntled about the low grade they expect to earn in an instructor’s class.
Now, none of this means that you should be anything but honest on your course evaluations. My goal here is not to guilt you into completing evals in certain ways or deter you from completing them at all. I’m not going to remind you that professors have feelings or bring you cookies to sweeten you up (which actually works, by the way). Your teachers are professionals, and it would be unrealistic to expect you to care about SETs as much as we do. But utter cluelessness is not really what we aim for around here, especially about this end-of-semester ritual that’s baked into your college experience.
So here goes. Let’s dive into the assigned readings for today. But first, let’s make a quick detour to our course’s learning objectives. [Good-natured eye rolls.] I know, I know. I’m a broken record: everything must connect to our objectives. But seriously, look at number three: “Students will comprehend, summarize, engage with, and produce persuasive arguments in a variety of modes.” How perfect! Your tasks for today were to read and annotate two texts—Stanford University’s “Course Feedback as a Measure of Teaching Effectiveness” and Kevin Gannon’s “In Defense (Sort of) of Student Evaluations of Teaching”—and generate two lists: one for arguments in favor of evals and one for arguments against them. Please take out your lists (or pull them up on your phone or laptop) and spend five minutes talking with your neighbor about them. Pay particular attention to any points of agreement or disagreement. [The classroom buzzes as students engage in lively dialogues with their partners; I walk around the room to eavesdrop and keep them on track.]
OK, let’s come back together and share with the class. I’ll track our discussion on the whiteboard as we go. Which arguments did you find most and least persuasive? [Ten-minute conversation ensues.]
It’s no surprise, of course, that biases show up in evaluations: you’re biased, I’m biased, we’re all biased. We are, after all, humans with ruthlessly efficient brains. And we are products of a culture in which deep-seated inequities and prejudices persist.
But here’s the thing: I’d rather not settle for your brains’ default settings. Like the cognitive wrappers that we complete after assessments, much of our work together has encouraged you to think about your own thinking. Studying literature is part of this project, too, because it helps us see the world through someone else’s eyes; interrogate our own privileges; understand the forces that shape our identities; and confront our biases, both implicit and explicit.
The good news? You’ve learned skills this semester that can help you complete your evaluations thoughtfully, thoroughly, and in ways that will prove beneficial to your instructors. Rhetorical analysis. Feedback techniques. Revision. Peer review. How to avoid cognitive biases and logical fallacies. (These competencies are all referenced in learning objectives five and six, by the way. I know. I just can’t help myself.) And take a look at our institutional evaluation form, which I’ve included in your course pack. The questions in the qualitative, open-ended section look a bit like . . . writing prompts, right? You’ve got this.
So, let’s begin where we always begin. Remember that groan-inducing joke I’ve been making all semester? Writers who write in vacuums usually . . . ? [“Suck.”] That’s right. When we’re faced with a writing task, what do we always analyze first? [“The rhetorical context.”] Exactly. You know the questions to ask; I’ll answer them as best I can.
[“What’s the primary purpose of this text?”] To help instructors improve their teaching. [“Secondary purpose?”] To guide hiring, promotion, and staffing decisions. [“Who’s the primary audience?”] Your instructor. [“Is your audience receptive or resistant?”] Generally receptive, although somewhat less so if the feedback is negative. [“What types of evidence will be most persuasive?”] Specific, actionable feedback about aspects of the course that are within the instructor’s control. I recommend using the same describe-evaluate-suggest format that we’ve applied to peer writing this semester. [“How can I bolster my credibility?”] By limiting your comments to what the instructor does rather than who she is; leave out references to your teacher’s appearance, race, gender, or ethnicity. [“What’s at stake?”] Quite a bit. Honestly, if your instructors were the only ones who read your feedback—like the midterm evals I had you complete—we wouldn’t even be having this conversation.
But they’re not. [“Who else?”] Senior administrators. Department chairs. Colleagues. Tenure and promotion committees. Ay, there’s the rub. It’s hard to dismiss SETs’ bias as the cost of doing business when your livelihood largely depends on these scores. One of the reasons I’m having this conversation with you, in fact, is so my more vulnerable colleagues—part-time and untenured faculty—don’t have to.
So how can you make your feedback as unbiased and helpful as possible? Well, one of the most exciting projects on this topic comes from a peer-to-peer partnership at the University of California, Merced, in which student interns train their classmates to give constructive feedback on course evaluations. As you watch this short video, please jot down the specific recommendations that these students make; those recommendations will form a rubric we’ll use to analyze the handout that I just passed around.
This handout lists a dozen or so student comments that I’ve received over the years. Please add a plus sign next to the items that align most closely with your rubric; add a minus sign next to the items that don’t.
OK, let’s discuss. Plus signs? [“The documentary we watched on The Canterbury Tales helped me understand Middle English pronunciation.” “Attending Hamlet at the St. Louis Rep made the play come alive for me.”] Minus signs? [“I hate Macbeth.” “I thought it was really unprofessional when she wore a ‘Get out and vote’ T-shirt to class on election day.” “Her pants didn’t go all the way to the floor.” “None—Dr. DeWall ROCKS!”] Good. How could we turn some of these minus signs into plus signs? [“Give specific examples of what Dr. DeWall does to make the class rock; elaborate upon why the student hates Macbeth—unless it’s something that Dr. DeWall can’t control, like a student’s phobia of witches.”]
Nice work! We have about 20 minutes of class time left. Why don’t you get started on the course evals for your other classes? If you’re on your phones, I’ve projected a QR code that will take you to our online assessment website; the link’s on my syllabus too.
OK, everyone—good work today! See you next time.
Nichole DeWall, PhD, is a professor of English at McKendree University in Lebanon, Illinois, where she teaches Shakespeare, medieval and early modern literature, drama, and composition courses.