A Collaborative Midterm Student Evaluation


Can students collaborate on the feedback they provide faculty? How would that kind of input be collected? Both are legitimate questions, and both were answered by a group of marketing faculty who developed, implemented, and assessed the approach.

The first argument, supported by research cited in their article, establishes the value of collecting midterm feedback from students. Students tend to take the activity more seriously because they still have a vested interest in the course, and teachers have the rest of the course to make changes that could improve students' learning experiences. There's also research documenting that when midcourse feedback is collected and the results are discussed with students, end-of-course ratings improve. And they don't improve because teachers are doing everything students recommend; sometimes a policy doesn't need to be changed so much as it needs to be better explained.

The faculty involved in this project reasoned that having students collaborate on feedback for the instructor might have several advantages. It could increase student engagement with the process. Concerns about the low response rates generated by online course evaluations are now nearly universal, and students don't generally put much effort into the feedback they do provide. In one study cited in the article, students self-reported taking an average of 2.5 minutes to complete their evaluations. Because doing an evaluation collaboratively was novel and happened midcourse, the faculty thought students might invest more in the process.

They also wondered if the quality of the feedback might be improved by the interactive exchange required to complete it. Along with that, they thought that providing feedback in a venue visible to peers could increase students' feelings of accountability; perhaps it would be harder for students to get away with making highly critical, personal comments.

To test all these possibilities, the instructors used the fairly common STOP, START, CONTINUE feedback mechanism in which students are asked to identify what, if anything, the instructor is doing that interferes with learning, what the instructor might do to improve learning, and what the instructor is doing that helps learning. The benefit of using a form like this is that “it specifically requests developmental feedback as opposed to judgmental feedback” (p. 159). It directs students to identify specific things the instructor is or isn’t doing.

The faculty research team collected feedback via these prompts plus one additional open-ended query for other comments in multiple sections of several different marketing courses. In each class, half the students provided feedback on these questions via a paper-and-pencil format. The other half of the students went to a computer lab and provided the feedback in small groups using Google Docs. Any comment students made was visible to the others in the group so that students could answer the questions and comment on other students’ comments. After completing either the paper-and-pencil version or the online collaborative one, students were asked to evaluate the evaluation.

It's definitely a novel approach, and the first time the instructors tried it they discovered they had not fully prepared students; many reported being confused. Also, perhaps because students are used to not taking course evaluations seriously or being constructive in their feedback, some did not take this process seriously and offered irrelevant comments.

The second time, after having better prepared students to work collaboratively on Google Docs, more students took the task seriously. Results from the evaluation survey showed that “students evaluated the collaborative evaluation significantly higher on three measures: ‘easier to complete,’ ‘enjoyed completing the evaluation,’ and ‘could provide useful feedback’” (p. 162).

Both the paper-and-pencil and collaborative approaches produced useful information, and each had distinct advantages. The responses provided by individuals on the paper-and-pencil form were not influenced by what others in the group thought, and they were completed quickly. The collaborative approach captured the advantages of group synergy, as can be seen in the examples included in the article. Students created discussion threads in which they responded to each other's comments, agreeing, elaborating, and sometimes raising related issues.

But perhaps most compelling of all, both ways of collecting midcourse feedback demonstrated its formative value to the instructor, and the article contains multiple examples that illustrate this. In both cases, more than 70 percent of the students' comments were "actionable," meaning they pointed to something the teacher could act on. Sometimes the action was simple, like suggesting the instructor not "whip" or "fling" papers when returning them. The instructor was simply trying to get papers back quickly and was surprised to learn this was how students perceived the action; she now returns papers more "gently." More serious were complaints about the grading of SPSS projects. The instructor reviewed the criteria in class and corrected SPSS program errors with students; as a result, their assignments and grades improved.

It’s an affirming article that shows how students can be guided to provide feedback that improves instruction and, in the process, learn something about delivering it constructively.

Reference: Veeck, A., O'Reilly, K., MacMillan, A., & Yu, H. (2016). The use of collaborative midterm student evaluations to provide actionable results. Journal of Marketing Education, 38(3), 157–169.
