Assessing Team Members

Teachers who use group work frequently incorporate some sort of peer assessment activity as a means of encouraging productive interactions within the group. If part of the grade for the group work depends on an assessment by fellow group members, students tend to take their contributions to the group more seriously. Often teachers use a point distribution system in which a fixed number of points must be divided among group members and cannot be distributed equally. The problem with these systems is that the feedback they provide lacks specificity. Students don’t know what they are doing that accounts for the score they’ve received, and this makes improvement less likely.

A large faculty team representing the fields of management, education, educational assessment, and engineering education sought to improve assessment options for faculty by developing a behaviorally anchored rating scale (BARS) that could be used for both peer and self-assessment in groups. “By providing descriptions of the behaviors that a team member would display to warrant a particular rating, a BARS instrument could teach students what constitutes good performance and poor performance, building students’ knowledge about teamwork.” (p. 613)

They started with an instrument developed in 2007 that (based on a review of the literature) identified five broad categories of effective teamwork: 1) contributing to the team’s work; 2) interacting with teammates; 3) keeping the team on track; 4) expecting quality; and 5) having relevant knowledge, skills, and abilities. This instrument exists in both a long and a short form, but even the short form requires that students read 33 items and make a judgment about every item for each of their teammates. With four teammates, that’s 132 independent ratings. This research team felt the instrument made peer assessment a fairly daunting task.

The article describes the development process and reports on three empirical investigations that established the validity and reliability of the new, condensed instrument, which is included in the article.

The best features of the new instrument are the behavioral descriptions provided under each of the five categories listed above. They include examples of both positive and negative behaviors. In the “interacting with teammates” category, for example, positive behaviors include asking for and showing interest in teammates’ ideas and contributions, improving communication among teammates, and asking teammates for feedback and using their suggestions to improve. A set of satisfactory behaviors is also listed before the negative behaviors, which include interrupting; ignoring; bossing or making fun of teammates; taking actions that affect teammates without their input; and complaining, making excuses, and not interacting with teammates. (p. 626)

Self-assessment offers additional benefits. Students without much experience working in groups may over- or underestimate their contributions. Comparing their self-assessments with the ratings offered by their teammates makes those discrepancies visible, and if this is done fairly early in a group’s work together, there is time for individuals to adjust their behaviors.

The research team also developed a practice exercise to help familiarize students with the BARS format. Students are given a written description of the performance of four fictitious team members. Using the BARS form, they rate these fictitious team members and then are given feedback that shows how their ratings compared with those of expert raters. This familiarizes students with the instrument and helps them develop their skills as raters.

This article is an outstanding resource for faculty who use peer assessment in groups. It raises, discusses, and includes references on a wide range of issues related to the assessment of group skills. It shows that, with techniques like those described in the article, students can assess each other through processes that contribute to the group’s success and further develop their skills as group members.

Reference:

Ohland, M. W., et al. (2012). The comprehensive assessment of team member effectiveness: Development of a behaviorally anchored rating scale for self- and peer evaluation. Academy of Management Learning & Education, 11(4), 609–630.
