Student Ratings of Instruction: What about Those Written Comments?

Many faculty dread that end-of-semester email informing them that student evaluation feedback is available for viewing. Even instructors who routinely excel on numerical ratings find themselves struggling with students’ responses to open-ended questions, where inevitably one remark stands out in its harsh tone or offensive content. Sadly, the phenomenon of negativity bias—the tendency to focus more on negative than positive stimuli—ensures that faculty will be most likely to remember that particularly mean-spirited comment.

In a recent article in Studies in Educational Evaluation, Linse summarized the extensive literature on student ratings, offering helpful information and advice for interpreting the quantitative items. While less is known about written comments, Linse noted that they often receive disproportionate, and sometimes unfair, attention from both faculty and administrators.

Is there a way to build some perspective regarding written comments? Short of ignoring them altogether, here are some suggestions for faculty eager to minimize the emotional distress they can create.

1. Calculate participation rates. Students who complete the quantitative ratings section of evaluations may or may not opt to provide written feedback, which can result in low response rates. In four studies conducted in the past decade, the percentage of students who answered one or more open-ended questions ranged from 39 to 67 percent. Consider a course of 30 students: if 55 percent complete the overall course evaluation (a typical rate at U.S. universities) and 60 percent of those provide written comments, just 10 students, one-third of the class, end up representing everyone.

Before reading any of the comments, count the number of responses for each item, divide by the total number of students in the class, and write the corresponding percentage next to each question. Keep this participation rate in mind to maintain some perspective as you dive into the prose.
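If your evaluation system exports response counts to a spreadsheet, this arithmetic takes only a few lines to script. Here is a minimal Python sketch using entirely hypothetical questions and counts for a 30-student class:

```python
# Participation rate per open-ended item (hypothetical counts)
class_size = 30
response_counts = {
    "What did the instructor do well?": 12,
    "What could the instructor improve?": 10,
    "Any other comments?": 6,
}

for question, count in response_counts.items():
    rate = 100 * count / class_size
    print(f"{question}  {count}/{class_size} responded ({rate:.0f}%)")
```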

2. Organize responses by student or by other relevant variables. Most course evaluation summaries present the written comments by item rather than by student. While helpful for spotting trends, this format prevents instructors from discerning, for example, whether a highly critical response to four different items comes from one disgruntled student or four different students. You may also choose to organize written comments by variables assessed in the numerical items that are potentially associated with overall evaluation trends, such as majors vs. nonmajors, expected grade, perceived workload, or how much time students report devoting to the course.
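Most systems will export the raw responses as a spreadsheet, and regrouping them is straightforward. Here is a sketch, assuming a hypothetical export with one row per student per item (the column names are invented):

```python
import pandas as pd

# Hypothetical export: one row per (student, item) response
df = pd.DataFrame({
    "student_id":     [101, 101, 102, 103, 103, 103],
    "item":           ["pacing", "feedback", "pacing", "pacing", "feedback", "workload"],
    "expected_grade": ["A", "A", "C", "B", "B", "B"],
    "comment":        ["Too fast", "Slow grading", "Loved the pace",
                       "Pace was fine", "Helpful notes", "Heavy reading load"],
})

# Regroup by student: is one person behind several critical comments?
for student, group in df.groupby("student_id"):
    print(student, group["comment"].tolist())

# Or slice by a variable from the numeric items, such as expected grade
for grade, group in df.groupby("expected_grade"):
    print(grade, group["comment"].tolist())
```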

3. Solicit an unbiased review. Qualitative researchers note the importance of “perspective management” in data interpretation, urging investigators to recognize and minimize their own biases. Try giving your written comments to a trusted colleague; when the comments aren’t about one’s own teaching, it’s easier to read them objectively. Or share them with professionals in your campus’s faculty development center. Having encountered a broad sample of written comments over time, they can help you place yours in context.

4. Perform a content analysis. Another option, also borrowed from qualitative research (which is, after all, what we are doing when reading written comments), is to evaluate the responses systematically using standard qualitative procedures. Focus on representative rather than unique comments, and try to identify themes or categorize responses around standard areas of teaching emphasis (e.g., organization, feedback, accessibility). Combine written comments across several courses and semesters for a richer and more representative pool of data.
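A word processor or spreadsheet is enough for this, but if your comment pool spans many courses, a rough first pass can be automated. The sketch below simply tallies comments that mention hypothetical keywords for each theme; it is a crude screen, not a substitute for actually reading and coding the responses:

```python
# Crude keyword tally over themes (keywords are illustrative only)
themes = {
    "organization":  ["organized", "structure", "pacing"],
    "feedback":      ["feedback", "grading", "comments"],
    "accessibility": ["office hours", "email", "available"],
}

comments = [
    "Lectures were well organized but grading took weeks.",
    "Great feedback on drafts; hard to reach by email.",
]

for theme, keywords in themes.items():
    hits = sum(any(k in c.lower() for k in keywords) for c in comments)
    print(f"{theme}: {hits} of {len(comments)} comments")
```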

5. Consider tossing the outliers. Why not delete, or at least ignore, the nonrepresentative responses, a common practice in quantitative research analyses? In a 2011 article in Quality Assurance in Education, Wongsurawat offers an interesting strategy: assess every comment for its reliability and representativeness, based on correlations between the individual comment and the class averages. As in the item analyses performed during test construction, responses that do not correlate with the class consensus can be discarded. Not all comments merit scrutiny.
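Wongsurawat’s actual procedure is more involved, but the core idea can be illustrated simply: compare each respondent’s numeric rating profile with the class average, and treat low-correlation respondents as candidates for discounting. A minimal sketch with made-up ratings (the 0.3 cutoff is arbitrary):

```python
import numpy as np

# Hypothetical ratings: rows = students, columns = numeric evaluation items
ratings = np.array([
    [5, 2, 4, 5],
    [4, 2, 5, 4],
    [5, 3, 4, 5],
    [4, 3, 5, 4],
    [1, 5, 1, 2],  # idiosyncratic profile
])

class_means = ratings.mean(axis=0)

# Correlate each student's profile with the class-average profile; a low
# (or negative) r marks a respondent whose written comments may be
# nonrepresentative of the class as a whole.
for i, row in enumerate(ratings):
    r = np.corrcoef(row, class_means)[0, 1]
    flag = "  <- possible outlier" if r < 0.3 else ""
    print(f"student {i + 1}: r = {r:+.2f}{flag}")
```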

6. Add your own questions. There’s an argument that open-ended items explicitly directing students to identify faculty weaknesses or strategies for improvement encourage negative feedback that would not be elicited with broader prompts. In any case, you can probably do better. What do you want to know, and what would actually help you to improve? Most course evaluation forms permit adding supplementary questions.

7. Instruct your students in the art of giving feedback. Providing written comments can be a teachable moment. Talk with students about the type of responses that are helpful and that motivate teachers to make changes. In a study published in 2014 in Higher Education, Tucker found that when students were explicitly instructed in how to provide constructive, professional feedback, the rate of abusive or unprofessional comments was less than 1 percent.
