Making the Grading Process More Transparent

College teachers are always on the lookout for ways to help students better understand why their paper, essay answer, or project earned a particular grade. Many students aren't objective assessors of their own work, especially when there's a grade involved, and others can't seem to understand how the criteria the instructor used apply to their work.

As the author Matthew Bamber notes, grading is not a transparent process to students, even if they have been given the criteria or rubric beforehand. He devised an exercise for his master's-level accounting and finance students that they found "eye-opening." In the UK, students "sit" for lengthy exams—in this case, a three-hour, closed-book essay test. In the exercise, students began by answering one lengthy essay question. When finished, they were given a suggested answer to the question (it contained a problem they had to solve and a written analysis), a marking guide, and a set of grade descriptors. Then they were given an anonymous answer to the same question and told to grade it using the materials provided. After completing that step, students were given a teacher-graded copy of the anonymous answer. The exercise concluded with students grading their own answers to the question.

What an experience for students! You can see why they’d call it eye-opening. Adaptation of the whole exercise or parts of it could profitably be used during exam review sessions. Students could work individually or in groups with a problem or an essay question, maybe one from an exam used in a previous course. The grading criteria or a rubric will be helpful when students have little or no experience grading exam answers or other kinds of written work. The exercise is made powerful when the grades given by the students are compared with those given by the teacher and students can see how the teacher has applied the grading criteria differently than they did. In the case of these accounting-finance students, they saw not only the final score but also how the instructor scored the various parts of the answer.

If students are new to grading, it may be better to start by having them look at an anonymous answer first. Bamber notes that the quality of the answer students grade is important. If it's a superstar answer with little that needs correcting, students may not learn much from the process. On the other hand, they may not learn much either if the answer is poor and doesn't exemplify any aspects of a good answer. Bamber chose an answer in the midrange.

Early on as well, students may benefit from group collaborations. They can start by making an individual assessment of the answer. Then when they convene as a group, those individual assessments can be shared and used to generate a group assessment of the answer. This gives students the opportunity to hear how others read and responded to the answer.

The idea of having students look at their own work with the objective of grading it is the kind of experience that begins to develop accurate assessment abilities. The accounting-finance students were surprisingly accurate in their assessment of their answers after having graded the anonymous one. Obviously, this part of the exercise must be designed so that it encourages students to be honest. It's not about the grade they want but the one they think their work deserves. Here too, having the criteria, rubrics, or checklists is helpful. An incentive may be necessary as well, since the grade students give their own answer doesn't count, and they may not otherwise see any reason to take the activity seriously.

The accounting-finance students in this article gained an appreciation for the grading process. "Many indicated their surprise at the amount of time, energy and concentration that this exercise required of them" (p. 480). They also expressed concerns about the integrity of the process when teachers have many answers to grade. The instructor concludes, "The findings suggest that this exercise has the potential to enhance participants' understanding of a subject as well as its assessment criteria." Even though the exercise as described here probably can't be replicated in most North American courses, adaptations of it can. The exercise in various iterations has the potential to make the assessment of graded work more transparent to students.

Reference: Bamber, M. (2015). The impact on stakeholder confidence of increased transparency in the examination assessment process. Assessment & Evaluation in Higher Education, 40(4), 471–487.
