A Unique Peer Assessment Activity with Positive Results

Typically, when students review each other's work, it's a formative process. They offer feedback that ostensibly helps improve the final product, which is then submitted to and graded by the teacher. But that's not the only option.

A faculty member teaching sections of an introductory psychology course developed a five-step, double-blind peer assessment activity. It involved a practice quiz with two short essay questions, which students took in class. They identified their work with a number code, which they then provided to the instructor on a class roster. The instructor shuffled the quizzes and redistributed them so that each student had a quiz identified by a number code but no name. The instructor then projected a grading rubric with correct-answer information, which students used to grade the answers. Students were free to ask the instructor questions about the information on the rubric.

Once that step was completed, students traded the graded quiz with someone seated nearby and proceeded to grade the second quiz, recording their grade alongside the first. There was no expectation that the two graders would agree on a grade. After the second quiz had been graded, the two partners explained their rationale for the grades they'd given on both quizzes. The goal of the discussion was to inform, not to persuade. However, if after this discussion one of the partners wished to change the grade he or she had assigned, that was permissible.

All quizzes were then returned to the instructor, who used the numeric code to return each quiz to the student who wrote it. Students reviewed the feedback provided by their peers. If the two peer graders disagreed on the grade, that triggered an instructor evaluation; the binding grade would be determined by the instructor. If the peer graders agreed and the student agreed with their assessment, that stood as the final grade on the practice quiz. If the peer graders agreed but the student was unwilling to accept their grade, that also triggered an instructor review, with the student invited to share his or her objections to the peer grade.

Of interest to the instructor was whether this peer review activity would affect performance on the course's three unit exams. To answer that question, students in a control section completed the same peer review activity but did so after the exam rather than before it.

In addition to these practice quizzes, students in both sections of the course took nine online mastery quizzes (counting for 13.5 percent of their grade). The instructor assumed that performance on the online quizzes and regular class attendance would affect exam performance, so he controlled for these variables. According to the author, “The present study provides evidence of a beneficial impact of participation in a peer assessment activity on students' performances on subsequent course exams, an effect that holds even after accounting for online mastery quiz performance and rate of attendance” (p. 183).

Given previous research on peer assessment, these results are not surprising. Studies we have reported on have explored how looking at the work of peers helps students become more objective about their own work. The value of this particular approach resides in the design features of the activity. This is not bogus grading that doesn't matter or count; if the conditions are met, these student assessments stand. That makes it an authentic activity students are likely to take seriously, which only heightens the value of the peer assessment experience.

It is encouraging to see more teachers making use of peer assessments. As this activity also illustrates, the benefits do not accrue automatically. Students need guidance—in this case, a rubric and correct answers. They need practice and feedback, both provided by this activity. They assess two different quizzes and then get to compare and discuss their assessments with a peer.

It's a unique and creative approach to peer assessment, with design features that both develop strong peer assessment skills and improve content learning, as measured by exam scores in this study.

Reference: Jhangiani, R. (2016). The impact of participating in a peer assessment activity on subsequent academic performance. Teaching of Psychology, 43(3), 180–186.