Improving Peer Feedback

Students regularly talk to one another about homework and course assignments. They discuss what they think the teacher wants, offer advice about what to study, and sometimes look at one another’s work and provide feedback. That feedback runs the gamut from generic commendations like “that looks good,” to advice on comma placement, to detailed feedback on the substance or solution. The latter is usually the exception rather than the rule, unless students have learned how to give and receive feedback in exchanges with peers. Many teachers try to provide that experience with in-class peer-review activities. They may give students checklists, question sets, or rubrics to guide their assessments and the feedback they then provide. The feedback may be written, or it may be exchanged online or in face-to-face conversations. But do these teacher interventions improve the peer feedback? Can students learn to give one another feedback that enables them to improve their work?

Here’s a study that answers those questions with a resounding yes. The course was introductory calculus, and students provided one another feedback once a week on a “challenging, in-depth homework problem” (p. 4). Students worked on the problem at home, individually. After working on it, they completed a self-reflection. In class, they shared and discussed their solutions. Students then had the opportunity to revise their work before turning it in. That process alone improved students’ pass rate by 13 percent.

Researcher and instructor Daniel Reinholz was particularly interested in those conversations between peers. What if attempts were made to improve the quality of the feedback students provided to one another? What if students were given some training? Would that change how they talked to one another about the problems?

To find out, Reinholz devised a training experience that took place once a week, immediately after students turned in their final solutions to the challenging problems. Students were given three sample solutions to one part of the problem they had just completed. They were asked to rate the quality of those solutions and explain how they could be improved. Students first wrote down their thoughts before engaging in a whole-class discussion.

A variety of data were collected to determine whether the conversations students had with this training (Phase 2) differed from the conversations that took place when no training had occurred (Phase 1). Data included video observations, copies of student work and exams, audio recordings of student conversations (54 from Phase 1 and 86 from Phase 2), student surveys, and interviews with students about their experiences. These data were analyzed with a variety of methods as well, including content analysis of the conversations, qualitative analysis of a subset of the conversations, and analysis of the interviews.

To start simply but dramatically: the average length of the conversations increased significantly, from 351 words (SD = 173) in Phase 1 to 635 words (SD = 252) in Phase 2. Conversation length nearly doubled. Drawing on previous research, Reinholz considered three kinds of feedback: feedback focused on the process, feedback focused on the product, and feedback directed to the person. In Phase 2, students spent more time talking about process feedback. They used more question words: 11.5 per conversation, compared with 6.3 per conversation without training. They also used almost twice as many communication words, such as explain, find, mean, read, tell, and understand, in the training phase. Students spent proportionally more time talking about the product in Phase 2, and they offered more feedback directed to the person, such as giving one another ideas about problem solving in general. Finally, when students were trained, the course pass rate improved by 23 percent.

In sum, Reinholz reports, “the improved conversations consisted of much more on-topic talk and productive feedback; after training, students provided more feedback related to processes (communication and underlying reasoning) than product (correctness or incorrectness)” (p. 1). Interestingly, the training provided in this study was not advice on how to give good feedback. Rather, quality feedback was demonstrated through discussions of problems students had just completed. So students gained more exposure to the content as they were learning how to analyze and talk about challenging problems.

Reference: 

Reinholz, D. (2016). Peer conferences in calculus: The impact of systematic training. Assessment & Evaluation in Higher Education, 42(1), 1–17.
