Reenvisioning Rubrics: A Few Brief Suggestions


Linda Suskie's Assessing Student Learning documents a wide variety of common assessment errors that result from the subjective nature of grades in all but the most factual subjects. These include leniency, generosity, and severity errors; halo, contamination, similar-to-me, and first-impression biases; and, most common of all, rater drift, the unintentional redefining of scoring criteria as the marker grows tired. All of these failures point to the need for more objectivity and a better system of accountability.

There is no perfect solution to the challenges of meaningful grading, but many of us have found that rubrics move us toward greater objectivity by breaking the desired outcomes into individual elements. However, when rubrics rely on general terms like excellent, good, fair, and poor, they can still be highly subjective. Such terms encourage instructors merely to get a general “feel” for a student's work and, on the basis of that first impression, subconsciously (or consciously) assess accordingly across every item in the rubric. More detailed descriptions of these terms can improve outcomes, but the explanations can become rigid and confusing.

Here are some suggestions that, although still not perfect, have been helpful for me and my colleagues at my institution. These approaches also help my students better understand what instructors need from them. Since applying them, I find that students don't repeat the same mistakes as often as they did when I used more generic terms on my rubrics.
  1. Replace evaluative headings with more descriptive terms: for example, “clearly evident,” “evident but in need of some development,” “evident but in need of a lot of development,” and “not evident.” It is difficult to remove the subjective element entirely from qualitative assessment, but I have found that students understand these descriptors better than more evaluative headings.
  2. Use the headings “extensive treatment,” “moderate treatment,” and “no treatment” when the assignment focuses on a dialogue between theory and practice. For example, when looking at the cultural and social factors that influence a specific case study, there are multiple areas in which a student might engage with the theory; it is not necessary to address every area in every case. Assessment should rest on the areas selected and the balance among the areas addressed. Because more subjectivity is involved here, I don't use these headings as regularly.
  3. Provide the rubric in advance. I know there is significant debate on this point; some worry that creativity and initiative will suffer if students approach the rubric in a rigid, mechanistic fashion. However, since I began providing the rubric up front, student complaints about assessments have declined dramatically. Many students have found the rubrics to be helpful guidelines that develop their critical writing skills.
  4. Include a comments section following the rubric table, and provide more positive than negative comments. Students are more willing to look at areas in need of improvement if they sense they have made progress on the journey. As a basic rule of thumb, I have found that my students can only cope with a maximum of three negative comments on their work. If students are flooded with too many suggestions, they end up ignoring them all.
  5. Find positive ways to give a negative critique. For example, “The next time you do work like this I would urge you to consider the following . . .”
  6. Don't place a grade anywhere on the paper or the rubric. My experience has been that the moment students see the grade, that's all they think about; they pay more attention to the grade than to the feedback you've provided. We have to give grades eventually, but if we can delay doing so, there is a better chance that students will focus on the feedback.
  7. Have students self-assess using the rubric. The ability to make judgments about your own work is an essential metacognitive skill, and with practice, students' self-assessment skills can grow. This also has the side benefit of showing you the extent to which you have adequately taught not merely the content of the course but also the methodological elements. For example, a student self-assessment lets you see whether students can judge that they have clearly stated their thesis or provided a critical reflection on differing perspectives of an issue.
  8. Require students to respond to your assessment of their work, describing ways in which they might do similar work differently in the future. You could do this before giving them the final grade. One of our faculty members counts student responses to the assessment as 10 percent of the final course grade. Approaches like this encourage a detailed reading of the comments you've provided.
Rubrics aren't perfect, but they help; at least, they have in my experience. They make it easier for me to accomplish the key purpose of assessment, which is learning. Any tool we use should be designed to strengthen the quality of students' learning.

Perry Shaw, Arab Baptist Theological Seminary, Lebanon, can be reached at pshaw@ABTSLebanon.org.