Teacher and Peer Assessments: A Comparison



Interest in and use of peer assessment has grown in recent years. Teachers use it for a variety of reasons: it is an activity that can be designed to engage students, and, when well designed, it also encourages students to look at their own work more critically. On the research front, some studies of peer assessment have shown that it promotes critical thinking skills and increases motivation to learn. In addition, peer assessment is part of many professional positions, which makes it a skill that should be developed in college.

But for teachers, several questions linger. What kind of criteria are students using when they assess each other's work? Are those criteria like the ones their teachers use? Given the importance of grades, can students be objective, or do they only provide positive feedback and high marks? To what extent do peer assessments agree with those offered by the teacher?

Falchikov and Goldfinch's (2000) meta-analysis of 48 studies of peer assessment published between 1959 and 1999 reported a moderately strong correlation of .69 between teacher assessments and those done by students. A large educational psychology team decided it was time to update that research, especially given that many peer assessments are now completed digitally. They also wanted to learn more about the impact of certain factors on peer assessments. This team analyzed 69 studies published since 1999. Unlike Falchikov and Goldfinch, they included studies done at K–12 grade levels, although there were only a small number of them. They found that the estimated average Pearson correlation between peer and teacher ratings was also moderately strong, at .63. Most interesting in this recent research are the findings about factors related to peer assessment.
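For readers curious what the .69 and .63 figures measure, here is a minimal sketch of how a Pearson correlation between teacher and peer marks is computed. The scores below are invented for illustration; they are not data from either meta-analysis.

```python
# Sketch: Pearson correlation between teacher and peer marks.
# A value near 1 means peers rank and score work much as the teacher does.
from statistics import mean, stdev

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / ((len(xs) - 1) * stdev(xs) * stdev(ys))

# Hypothetical marks for six assignments (0-100 scale), invented for this example
teacher = [78, 85, 62, 90, 74, 88]
peer    = [80, 82, 60, 94, 70, 85]

r = pearson(teacher, peer)
print(f"Pearson r = {r:.2f}")  # prints: Pearson r = 0.97
```

A small set of toy scores like this can produce a very high correlation; the meta-analyses pool such coefficients across dozens of studies to arrive at averages of .69 and .63.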
What is noteworthy about this meta-analysis is its attempt to identify factors that affect the accuracy of students' judgments about the work of their peers. The analysis assumes that teacher assessments are the gold standard: students should be making assessments similar to those of the teacher. It is useful to know which factors help close the gap between teacher and student assessments.

The research team notes, “We included only theoretically meaningful predictors that could be reliably coded. As a result, the current meta-analysis explained only about one-third of the variation of the agreement between peer and teacher ratings” (p. 258). This means other factors must be influencing the correlation. For example, could the correlations be affected by whether the ratings were formative (designed to help the recipient improve) or summative (counted as part or all of the grade)?

This is relevant work with findings that should be considered in the decision to use peer assessments. As with so much of the research on instructional practices, the issue is less whether a particular approach is viable and more about the best ways to use it.

References

Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287–322.

Li, H., Xiong, Y., Zhang, X., Kornhaber, M., Lyu, Y., Chung, K., & Suen, H. (2016). Peer assessment in the digital age: A meta-analysis comparing peer and teacher ratings. Assessment & Evaluation in Higher Education, 41(2), 245–264.