It’s a feedback mechanism that’s been around for some time, most often used partway through a course. Students are asked to fold a sheet of paper in thirds and label the columns stop, start, and continue. Then they identify aspects of instruction the teacher should stop doing, things they’d like the teacher to consider doing, and practices they’d recommend the teacher continue doing. What they identify in each column should be instructional policies, practices, or behaviors that relate to their learning experiences in the course.
A group of faculty researchers was interested in whether using a structured format like this improved the quality of feedback students provided. To answer that question, they conducted two studies. In the first, they solicited student feedback in three courses: an entry-level graduate medical course, an undergraduate science course, and an undergraduate arts and humanities course. Students in the medical course provided feedback using a free-text option; they were invited to comment on any aspect of instruction. Students in the other two courses provided feedback using the stop, start, continue format.
To ascertain quality differences in the feedback, the comments students provided were coded using two criteria. First, was the comment positive or negative? Second, and more important, how in-depth was the comment? Was it a basic, general comment that relied on a descriptor like “excellent” or “good”? Did it go further and indicate why something was or was not good, as in “Good use of questioning” or “Lots of boring topics covered in lecture”? Or did it contain a constructive suggestion for change, or at least a clear inference about what needed to change, as in “Spoke way too fast in lecture” or “The activity would have worked better in smaller groups”?
Students who used the open-ended feedback option made more individual comments. In terms of quality, however, their feedback fell mostly in the simple descriptive category. That was in contrast to students in the other two courses, who made fewer comments but whose comments fell mostly in the second and third categories, qualified and constructive. “Taken together, these findings suggest that the specificity of questions asked in evaluation forms may influence the constructiveness of the feedback obtained.” (p. 761)
But the researchers worried that the higher quality feedback might simply be a function of the different courses and of the fact that students in the first study came from two different institutions. In the second study they used a single course, the entry-level medical course. A second cohort of students was asked to use the stop, start, continue structure to provide instructor feedback. Then, using the same coding system, the researchers compared the feedback from this cohort with that obtained from the first medical course group.
The results of the second study mirrored those of the first. With the unstructured feedback format, over 44 percent of the comments fell in the descriptive category. With the stop, start, continue format, 50 percent of the comments fell in the constructive category, the most in-depth of the three.
The results confirm an old adage: “The quality of the question determines the quality of the answer.” It’s an adage often forgotten when asking students open-ended questions about their experiences in a course. Questions like “What did you like most/least about the course?” are too open-ended. They invite students to comment wherever they will, which implicitly encourages them to respond to aspects of instruction that have nothing to do with their learning experiences in the course.
The researchers do point out that they used a simple, straightforward mechanism for assessing the quality of comments. Not all qualified or constructive comments focus on equally important aspects of instruction. “The handouts should have been provided before class” is constructive but not nearly as important as a comment such as “The homework problems didn’t seem to match with the problems the instructor did in class.”
Structured feedback, like that using stop, start, and continue, tends to clarify the impact of specific aspects of instruction. It also makes students responsible for suggesting alternatives. Quality input from students increases the chance that faculty will act on the feedback. It doesn’t mean always doing everything students recommend, but it can motivate faculty to try alternatives they may not have considered. By directing student responses, structured feedback also teaches students some of the principles of constructive feedback.
Reference: Hoon, A., Oliver, E., Szpakowska, K., & Newton, P. (2015). Use of the stop, start, continue method is associated with the production of constructive qualitative feedback by students in higher education. Assessment & Evaluation in Higher Education, 40(5), 755–767.