“What Were You Thinking of When You Decided on That Rating?”

Most student rating instruments include a question related to the feedback provided by the instructor. It may ask whether it was constructive, actionable, delivered in a timely manner, or some combination of these characteristics. Most teachers are conscientious about giving students feedback. Because they devote so much time and effort to providing it, they are often disappointed and frustrated when students don’t rate the quality of the feedback very positively.

That’s what was happening in the faculties of arts and social sciences and of law at the University of New South Wales. The question on their student rating form asked students whether they were given helpful feedback on how they were doing in the course. “Members of the staff [faculty] whose courses have been rated lower on feedback than on other factors have been puzzled as to just what it was that they would have to do in order to score really well on the feedback question.” (p. 50)

Article author Shirley V. Scott conducted a series of focus group conversations with students in these two programs. Her approach was direct. She gave students a copy of the question from the student rating form, asked them to think of a course they were enrolled in now and a course they had already completed, and rate both on the feedback question. Then she asked them to reflect and write about what aspects of those courses shaped their answer to the feedback question. “What were you thinking of when you decided how to rate that course?” (p. 51)

Her follow-up question is an excellent one, likely to generate answers that help the teacher and the students. When deciding how to rate an aspect of instruction, students read the item, and then events, actions, experiences, behaviors, and feelings rush through the mind. Without much conscious integration, they coalesce into a score. Thinking about them explicitly clarifies the rationale behind the score for both the student and teacher.

In this case, the written responses and a follow-up discussion were used to formulate a definition for feedback, although it was clear that not all students were defining it the same way. About a third of them thought of feedback exclusively in terms of the teacher’s response to assignments. The rest of the students defined it more broadly, including features like the teacher’s nonverbal responses.

Both perspectives were incorporated in the definition: feedback is “what students use to gauge throughout the course how well they are doing in terms of the knowledge, understanding, and skills that will be used to determine their overall grade in the course” (p. 52). Scott believes this understanding of feedback explains why an abundance of teacher feedback still may not result in a high score on the rating item. If some comments on a paper are positive, some negative, and some neutral, that’s good feedback, but from the student perspective it may not clarify how they are doing in the course.

This need to know “how am I doing?” more or less continually may be a feature of this generation of students. It could also indicate learners who are not confident in or capable of self-assessment. They can’t decide how they’re doing, are afraid their assessment is incorrect, or may believe that what they think doesn’t matter, since their understanding of how they’re doing doesn’t count. Scott recommends helping these students by giving them exemplars so they have a better understanding of what they’re aiming to achieve.

Feedback doesn’t have an agreed-upon definition among scholars, among those who teach, or, as this analysis shows, among students. What was discovered here is specific to students and faculty in two programs at one institution. The most valuable part of the analysis is the approach used to discover what students were thinking when they came up with a particular rating score. It’s a technique that could easily become an activity in any course, and it seems like a useful way to gain insight about scores on any evaluation item where the student rating isn’t what was expected or doesn’t seem to make much sense. 

Reference: Scott, S. V. (2014). Practicing what we preach: Towards a student-centered definition of feedback. Teaching in Higher Education, 19(1), 49–57.

