Student Peer Review and Learning

Sometimes it’s good to step back and take a look at something from a distance. Meta-analyses provide some of that perspective. They take a bundle of individual studies, combine their findings, and offer an empirical view of a phenomenon—in this instance, students reviewing their peers. A recent analysis inquired about the general effects of peer review on learning and the factors that influence those effects. Do the many benefits proposed actually accrue when students exchange feedback with their peers? The researchers note that an array of individual studies offers mixed results.

For Those Who Teach from Maryellen Weimer

Those uneven results aren’t surprising given the diversity of what counts as peer review—for instance, exchanging feedback over written work, evaluating the contributions of group members, critiquing performances of various sorts. The researchers classified these variables as the peer review setting. Still more variation surrounds the logistical details associated with the use of peer review—what the researchers classified as its assessment mode. Does the peer feedback come in the form of ratings, comments, or both? Are criteria involved? Is more than one assessment made? And finally, what about feedback reciprocity (it’s given and received)? Are the reviewers anonymous? Is there more than one reviewer?

Like most complex phenomena, student peer review has spawned a plethora of studies. The researchers rounded up 350 published between 1950 and 2017. Only 58 met their criteria for inclusion. Does that mean all the rest were inferior? No, but these researchers considered only studies with effect sizes they could statistically combine. That’s the downside of all that diversity. Empirical comparisons depend on similarities, which rules out a lot of approaches and narrows the view.
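
To make “effect sizes they could statistically combine” a bit more concrete, here is a minimal Python sketch of fixed-effect, inverse-variance pooling, one common way meta-analyses weight and average study results. The numbers are invented for illustration; they are not Li et al.’s data, and the authors’ actual model may well differ.

    # Illustrative only: pooling standardized mean differences across studies.
    # The effect sizes and variances below are hypothetical, not from Li et al. (2020).
    effect_sizes = [0.15, 0.40, 0.25]   # per-study standardized mean differences
    variances = [0.02, 0.05, 0.03]      # sampling variance of each estimate

    weights = [1 / v for v in variances]   # more precise studies count for more
    pooled = sum(w * d for w, d in zip(weights, effect_sizes)) / sum(weights)
    print(round(pooled, 3))                # a single combined estimate of the effect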

That said, meta-analyses come with bottom lines. They ask big questions and announce straightforward conclusions, as this study does. “In this meta-analysis, we found that peer assessment in general has a nontrivial positive effect on students’ learning performance” (p. 204). If you want it numerically, “Compared to students who did not receive peer assessment, students who did receive peer assessment showed a .291 standard deviation unit improvement in their general performance” (p. 202).
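
If a .291 standard deviation improvement is hard to picture, one conventional translation (Cohen’s U3, which assumes roughly normal score distributions) puts the average peer-assessed student above about 61 percent of comparison students. A quick Python check of that arithmetic:

    import math

    d = 0.291                                    # reported standardized mean difference
    u3 = 0.5 * (1 + math.erf(d / math.sqrt(2)))  # standard normal CDF at d (Cohen's U3)
    print(f"{u3:.0%}")  # ~61%: share of comparison students scoring below the average peer-assessed student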

As for factors that positively influenced the learning effect, the authors note that “among the many factors examined, rater training emerged as the strongest factor in explaining the variation of the peer assessment effect” (p. 203). Obviously, the training grows out of what peers are assessing, but pretty much across the board, training that has students make judgements against criteria and gives them opportunities to practice increases the accuracy and usefulness of peer feedback. None of the other factors analyzed reached statistical significance, but three appeared to make a difference: doing peer review more than once, participating in activities that involved both giving and getting feedback, and providing peer feedback anonymously.

This empirical view of student peer review closely aligns with what’s commonly reported in the literature that describes its use. But the review does offer one surprising result. “We found that peer assessment not only generated a larger effect than no assessment but also showed a more positive effect than teacher assessment” (p. 202). In other words, what students gleaned from their peers had a greater effect on their learning than feedback teachers provided. Are there reasons why that might be so? The research team suggests that teacher feedback doesn’t develop student self-reliance. Boud and Molloy (2013) elaborate, explaining that students should not be exclusively “dependent on a drip feed of comments from teachers” (p. 706) but should function as self-regulated learners able to calibrate their own judgements. Ultimately, students should be able to figure out for themselves when a paper, performance, or other product meets a specified set of standards, and teacher feedback usually doesn’t address how to make those assessments.

Does peer feedback develop the ability to make accurate self-assessments? That’s not a question this review answers. I wondered whether peer feedback’s effectiveness might be explained by something as simple as its immediacy. It comes from an equal—someone taking the course, maybe even in the same group. When peers are involved, those assessments mean something; what the teacher thinks matters but what classmates say matters more. So, the student pays closer attention to peer feedback and thereby learns more from it.

The picture offered by a meta-analysis is larger than that offered by a single study, a collection of discipline-based work, or research that explores peer review in a particular context, such as group work. Even so, standing back only takes us so far: with peer review, as with many other instructional approaches, we are still seeing just part of the big picture.

References

Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698–712. https://doi.org/10.1080/02602938.2012.691462

Li, H., Xiong, Y., Hunter, C. V., Guo, X., & Tywoniw, R. (2020). Does peer assessment promote student learning? A meta-analysis. Assessment & Evaluation in Higher Education, 45(2), 193–211. https://doi.org/10.1080/02602938.2019.1620679

