Teaching through Guided Evaluation

Credit: iStock.com/SDI Productions

As faculty, we tend to chalk up students’ failure on assessments to lack of effort or lack of understanding of the material. But often that failure stems instead from the gap between instruction and performance, where misunderstandings intervene to undermine performance. Guided evaluation of past assessments as exemplars can be a powerful tool for closing this gap and improving student performance.

Consider the example of coaching. Coaches focus on improving player performance on the field, and so much of their teaching is done through guided evaluation of performances. The first thing NFL players do the day after a game is sit through film study of that game. They learn about their own performance: what they did right or wrong and what they need to do to improve. But they also learn from seeing other players’ performances evaluated. They see the mistakes they need to avoid and what they should emulate. These evaluations bring the coach’s instruction to life by showing players how to apply it in their performance, thus helping to close the gap between instruction and performance. They also correct misconceptions players might have formed about how to apply the techniques they have learned.

Similarly, faculty have large repositories of student work from prior classes, and that work is fertile (if seldom-used) teaching material. Using past student work as a guide, faculty can clarify their expectations and establish standards of excellence with current students. For instance, students tend to read assigned articles for simple facts, whereas the instructor wants them to read for underlying concepts (Rhem, 2009). An instructor walking through different examples of student essays can point out that discussing concepts is the goal of the work and so correct any misunderstanding.

Another benefit of this process is that it can demonstrate how an expert in the field analyzes the types of problems that students are given. Instructors in quantitative fields like math and physics teach students how to solve problems by applying procedures to examples. But they often leave out the analysis that determines which procedures to use. As a result, students search for commonalities between examples to pick the correct procedures, but they often latch onto the wrong commonalities. One study, for instance, found that physics students tend to classify problems by superficial features, such as “circular problems,” because they see shape as common to a particular procedure when it is merely a coincidental feature of the problems used as examples (Chi et al., 1981). Their instructors instead classify problems according to underlying principles, such as conservation of energy, to determine the proper procedures. Instructors do not see this disconnect because of the expert blind spot: the tendency of experts not to understand the problems of novices because they do not see the world the way novices do.

Part of the problem is that instructors tend to show only the correct ways to solve problems, not common errors to avoid. Also, instructors use problems that they have seen and solved before and so tend to bypass the first step of analysis. Walking students through the process of solving problems on an exam allows the instructor to step outside their own perspective by taking the position of a student who is seeing the problem for the first time. They can make their own thinking visible by explaining how they analyze a new problem, demonstrating the correct categories to use, and using student errors on former exams to illuminate common mistakes to avoid.

Faculty can use parts of past student work scrubbed of any identifying content for their examples or create their own. I like to do both. I pick or create essay examples that illustrate common student errors as well as exemplars that demonstrate the standards students should strive toward. As faculty we often teach only the correct processes, forgetting that errors are also a powerful learning device. Errors also capture our attention. There is a reason advertisements use headlines like “The five most common investment errors that ruin retirement savings” rather than “The principles of retirement investing.” We want to hear about errors so we can avoid them, and students will perk up their ears when a faculty member tells them that they will learn about the common mistakes they need to avoid to be successful.

After guiding students through various examples, the instructor can then give them new ones to evaluate on their own. This step can include peer evaluation, in which students help one another improve their work. Numerous studies show that peer evaluation benefits not only the person being evaluated but also the person doing the evaluating. Baniya et al. (2019) found that students learned quite a bit from doing peer evaluations. As one put it,

I liked how I could view other [students’] works and see what they did right in order to improve myself. I thought this was helpful because it allowed me to read criticism that I wouldn’t think my project would have. (p. 89)

A similar sentiment was expressed in a peer evaluation study by Canty (2012): “Having completed the assessment I am more confident in my ability to think for myself and produce work to a fairly high level that is also creative and interesting” (p. 231).

The upshot is that as faculty, we tend to focus so much on conveying information to students that we forget about the inference gap between teaching and student application of learning, where students can legitimately infer different things from our instruction. That makes evaluation of performance examples a valuable tool for improving learning and student performance.

References

Baniya, S., Chesley, A., Mentzer, N., Bartholomew, S., Moon, C., & Sherman, D. (2019). Using adaptive comparative judgment in writing assessment: An investigation of reliability among interdisciplinary evaluators. Journal of Technology Studies, 45(2), 24–35. https://doi.org/10.21061/jots.v45i1.a.3

Canty, D. (2012). The impact of holistic assessment using adaptive comparative judgment of student learning [Doctoral dissertation, University of Limerick]. https://ulir.ul.ie/handle/10344/6766

Chi, M., Feltovich, P., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5(2), 121–152. https://doi.org/10.1207/s15516709cog0502_2

Rhem, J. (2009). Deep/surface approaches to learning in higher education: A research update. Essays on Teaching Excellence: Toward the Best in the Academy, 21(8). https://podnetwork.org/content/uploads/V21-N8-Rhem.pdf

