Low-Stakes Grading

Like a lot of terms in higher education, low-stakes grading doesn’t always refer to the same thing. In some cases it means small assignments that don’t count for much but occur regularly, the way quizzes are often used. Low-stakes grading can also mean a de-emphasis on correct answers, with credit awarded for completed answers. Upon first consideration, many faculty might wonder why a teacher would opt for the second approach. If credit is based on completion, how likely are students to expend anything more than minimal effort?

Three studies offer an interesting answer to that question. All three explored peer collaboration in which students worked together on questions and then submitted answers via clickers (classroom response systems).

In the 2006 study by James, students in two physics courses taught by two different instructors were asked three to five multiple-choice clicker questions during each period. They were given time to discuss possible answers with those seated nearby before submitting their individual answers. In the “high-stakes grading” course, answers to the clicker questions counted for 12.5 percent of the overall course grade, and incorrect answers earned one-third the credit awarded for correct answers. In the “low-stakes grading” course, these questions counted for 20 percent of the overall course grade, and incorrect answers earned as much credit as correct ones. A subset of student conversations about the clicker questions was recorded, and student comments were categorized according to whether the comment stated an answer preference, provided justification for an answer, posed a question or idea for consideration, rephrased an idea, and so on.

“An interesting statistical difference between the classes,” James reports, “was observed in the degree that conversations were dominated by one member of the conversing pair.” To measure this, the total number of ideas offered by one student was divided by the total number exchanged by the pair; the difference between the two partners’ fractional contributions was labeled conversation “bias.” The mean discourse bias was much higher when the grading was high stakes: “In the high stakes classroom, students with more knowledge tended to be more dominant in CRS [classroom response system] peer conversation. . . .” (p. 690)
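
To make the measure concrete, here is an illustrative calculation (the numbers are hypothetical, not drawn from the study): if one partner offers 8 of the 10 ideas a pair exchanges and the other offers 2, their fractional contributions are 8/10 = 0.8 and 2/10 = 0.2, giving a conversation bias of 0.8 − 0.2 = 0.6, or 60 percent. A perfectly balanced conversation, with 5 ideas from each partner, would have a bias of 0 percent.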

From one perspective, this confirms what worries so many faculty about student collaboration: the more knowledgeable students are simply giving answers to those who know less. But before drawing conclusions against collaboration, consider what happened in the low-stakes grading class. There, conversation bias was significantly lower (mean discourse bias 14.8 percent, s.d. 10.9 percent) than in the high-stakes grading course (mean discourse bias 33.2 percent, s.d. 30.1 percent). Conversations were more balanced in the low-stakes classroom, with ideas put forth more evenly by both partners, and disagreement occurred at a significantly higher level.

James, now collaborating with a group of colleagues, followed up with a second study, this time in two large-enrollment introductory astronomy courses taught over two semesters with the same high-stakes and low-stakes grading schemes. The second study confirmed the original findings: “The assessment practices of instructors using the peer instruction technique for large-enrollment science courses have significant impact on the peer discourse that occurs in response to this technique.” (p. 5) In other words, knowledgeable students do the work for their partners when there’s more credit for a correct answer. When credit is awarded simply for answering, it’s the conversation, and, one could surmise, the learning that benefit. Under low-stakes conditions, students feel freer to ask questions, to disagree, to offer justifications, and to explain further, which means that by the time the answer is clicked in, both partners have explored the question more fully.

Using a very different study design, Turpen and Finkelstein came to a similar conclusion. Their study also explored peer instruction, in this case along three dimensions: faculty-student collaboration, student-student collaboration, and sense-making versus answer-making. In one of the three classes examined in the study, the instructor used a low-stakes approach to grading clicker answers; the other two instructors awarded extra credit for clicker responses, with correct answers counting more than incorrect ones. The low-stakes approach was one of several factors that contributed to greater student-student interaction and a larger focus on sense-making (figuring out why an answer was correct) than on answer-making (simply getting the answer).

These studies are very context specific, but they illustrate an important principle: the features of grading policies powerfully influence learning, both in terms of what students learn and how they learn it. In this case, low-stakes grading reveals some cracks in widely held assumptions about peer collaboration. It may not be collaboration itself that causes knowledgeable students to dominate exchanges but rather a grading policy that emphasizes correct answers over conversation about answer possibilities and justifications.

References:

James, M. C. (2006). The effect of grading incentive on student discourse in Peer Instruction. American Journal of Physics, 74(8), 689–691.

James, M. C., Barbieri, F., and Garcia, P. (2008). What are they talking about? Lessons learned from a study of peer instruction. Astronomy Education Review, 7(1), 1–7.

Turpen, C., and Finkelstein, N. D. (2010). The construction of different classroom norms during Peer Instruction: Students perceive differences. Physical Review Special Topics–Physics Education Research, 6, 020123.
