Peer Review of Writing: An Evidence-Based Strategy?

For Those Who Teach from Maryellen Weimer

Getting a handle on the effectiveness of widely used instructional strategies is a challenge. They’re used in different fields and with broadly divergent design details. Moreover, studying the effects of a strategy as it’s being used in a classroom presents research challenges and an array of possible methodological approaches. Bottom line: the quality of the work varies, and so do the results. This is why, as I’ve written before, although it’s popular to refer to a strategy as an “evidence-based strategy,” most of the time the “evidence” contains lots of caveats.

Nonetheless, we should tip our hats to any research team that attempts to collect and review studies on a particular strategy: case in point, this meta-analysis of the effects of formative peer review on student writing. The team (Huisman et al., 2019) begins by noting that “a quantitative synthesis of the research is still lacking for the impact that peer feedback has on students’ academic writing performance” (p. 863). There’s lots of literature on students providing each other feedback on their writing, and most of it makes claims about the benefits. Students get feedback that improves their writing, they learn something about the kinds of comments that support revision, and they get experience delivering criticism.

Does the research provide support for those broad claims? Not when they’re this expansive. Research questions are more specific and focused. So, in this review, the team looked at the impact of peer feedback in two areas. First, they assessed its effectiveness: “to what extent . . . peer feedback improve[s] students’ writing performance in comparison to: (a) receiving no feedback at all, (b) self-assessment and (c) feedback from teaching staff” (p. 866). Second, they considered two design variables: “the nature of the peer feedback (qualitative comments, quantitative grades/ranks, or a combination of both) and . . . the number of peers that students engaged with during peer feedback” (p. 866).

Their search identified 287 articles that looked at peer feedback in higher education. To be included in this review, the articles needed to meet six criteria, and only 24 studies did. Because the sample size is small and most of the studies generated data relevant to only one or two of the areas being reviewed, the researchers repeatedly caution against general conclusions. In other words, there’s evidence but not very much of it.

These researchers found that engagement in peer feedback resulted in greater writing improvement than either no feedback or self-assessment did, and that peer feedback and teacher feedback resulted in similar writing improvements. As for the nature of the feedback: “Specifically, a combination of both comments and grades tended to result in larger writing improvements than either comments or grades alone” (p. 875). Finally, there was a pattern indicating that students might benefit more from multiple peer reviewers than from just one, but that pattern was “nonsignificant” (p. 876).

It would be easy to look at a review like this and conclude that what the authors found was pretty obvious. Of course, some peer feedback is better than no feedback, and most student writers are notoriously subjective about their work, so peers are bound to provide at least some insights. It also makes sense that grades combined with comments, along with more than one peer reviewer, might have a greater effect. The only surprise was the failure of teacher feedback to generate more improvement than the feedback provided by peers.

But we shouldn’t discount efforts like this. The primary purpose of meta-analytic reviews is to update the status of current research so that subsequent inquiries can move forward in useful directions. Practitioners are not the primary audience. This team deserves credit for focusing the review on issues relevant to practice.

Peer review of writing is a well-established practice, and that means it rests on a solid experiential foundation, or what’s usually derisively referred to as anecdotal evidence. I don’t think what practitioners observe and experience should be written off that cavalierly. It’s just that the diverse ways we use peer review make it hard to build a functional knowledge base. Even so, we are inclined to make assumptions about effects, and that’s why it’s valuable to try to measure, in this case, what happens to a student’s writing when a peer has reviewed it. And as this review of the research makes clear, there’s scant quantitative evidence. What we believe about peer review’s benefits has experiential support, but its empirical effects remain largely unknown.

Reference

Huisman, B., Saab, N., van den Broek, P., & van Driel, J. (2019). The impact of formative peer feedback on higher education students’ academic writing: A meta-analysis. Assessment & Evaluation in Higher Education, 44(6), 863–880. https://doi.org/10.1080/02602938.2018.1545896 [open access]

