Better Feedback: More Instructional Change?

A thoroughly referenced article seeks to answer why science faculty members are slow to adopt evidence-based teaching practices, despite what the authors describe as “heroic dissemination” of information on these practices. The folks on the science side of the house have evidence that use of these practices is still anything but widespread. Would the evidence be any different for those teaching in other fields? It’s doubtful.

The authors explore a number of reasons for this failure, most of them having to do with the common strategies currently being used to improve instruction. Start with workshops. After citing a number of studies, these authors conclude, “Collectively, this work suggests that one-time workshops raise awareness of evidence-based teaching strategies but are not sufficient for faculty to adopt and successfully use these strategies.” (p. 187)

Then there are problems with student evaluations. The authors cite survey data in which 96 percent of faculty reported wanting more meaningful instructional feedback. And they note, “Items on student evaluations typically focus on student satisfaction and didactic teaching, rather than measuring learning. … [A]s the sole measure of teaching effectiveness or as an impetus to increase active learning in the college classroom, student evaluations are far from adequate.” (p. 188)

And there are several issues related to peer evaluations. “One-time observations have been shown to have virtually no impact on faculty teaching, aside from textbook selection, and may lead to erroneous inferences.” (pp. 188-189) As peer observation is practiced at most institutions, it is a summative assessment with results used for promotion and tenure purposes, not improvement.

And what do the authors propose as ways to increase the use of evidence-based practices? “We argue that providing faculty with formative teaching feedback may be the single most under-appreciated factor in enhancing science education reform efforts.” (p. 188) Based on research in organizational psychology and studies from K-12 teacher education and workplace performance, they identify three characteristics of feedback they believe would help faculty more successfully implement instructional change.

Effective feedback clarifies the task in a specific, timely manner, with a consistent message that informs recipients how to improve. Generic feedback such as “Be more organized” is less useful than feedback that focuses on behaviors and proposes specific strategies, in this case strategies that could be used to convey structure and coherence. And feedback needs to be timely, not delivered weeks after an observation or months after students offered their responses. Consistent messages reinforce personal agency and lead those receiving the feedback to raise their expectations for future performance.

Effective feedback encourages the instructor, improving motivation and stimulating increased effort. Issues here involve the tone of the feedback and the context in which it is given. This doesn’t mean the feedback addresses only what the teacher is doing well. “Feedback should be positively framed but not generically positive.” (p. 191) Also important here is consideration of the confidence and experience of the teacher receiving the feedback. Most new teachers are less confident and more easily discouraged by too much direct, negative feedback. The goal of feedback is to motivate faculty to change, and feedback that makes both new and experienced faculty believe they can successfully incorporate new approaches stands a better chance of motivating them than does excessively critical commentary.

Feedback is more likely to be sought if the potential benefit outweighs the costs. New faculty members generally have more reasons to improve their teaching than do senior faculty members. “They [senior faculty] are not likely to gain status as a result of improving their teaching, so the cost to their self-image may be too great to warrant voluntarily seeking feedback from peers.” (p. 192) All teachers are more likely to seek feedback from sources they consider credible, and students completing multiple online rating forms at the end of the course usually don’t fall into that category. Students can provide credible feedback, but they must be asked the right questions at the right time, and the quality of the feedback they provide is strongly influenced by how faculty respond to what they offer. Central to understanding this feedback characteristic is recognizing the big difference between wanting to improve and having to. When faculty feel “forced” to change, the costs of having to do so outweigh the benefits, and that compromises the effectiveness of the changes they may try to make.

This is one of those truly outstanding pieces of scholarship, tremendously useful to anyone interested in instructional improvement—their own or that of colleagues in a department or across an institution. The frame of reference here is the STEM fields, but the actual audience is much larger. “The efforts on the part of STEM instructors to reform instruction and shift the status quo closer to evidence-based teaching practices are heroic and ongoing, but we must match these efforts with improved instructional feedback.” (p. 195)

Reference: Gormally, C., Evans, M., & Brickman, P. (2014). Feedback about teaching in higher ed: Neglected opportunities to promote change. CBE—Life Sciences Education, 13(2), 187–199.
