Better Understanding Why and How Cases Promote Learning

Clyde Herreid, a biology professor at SUNY Buffalo, has been a leader in the use of case studies in science teaching. His interest in “stories with an educational message” began in the 1980s and has resulted in a large collection of cases (http://sciencecases.lib.buffalo.edu) as well as the National Center for Case Study Teaching in Science, which he directs. His work on cases, now involving a variety of collaborators, includes an ongoing research program that explores how case study teaching methods and learning outcomes interact. The two references below are recent examples of that research.

The study, whose results are reported in two articles, was undertaken in large biology courses taught by three instructors at two universities. It explores a range of questions and issues, some of which are highlighted here. Each instructor used the same eight cases in two sections of the course. In all sections, clicker questions were used throughout the course, and the research team studied how they were used during the cases.

The study has a number of practical implications, starting with the fact that clickers make it possible to use case studies even in large courses (all of these sections enrolled over 100 students, some as many as 300). Terry et al. (2016) explain that the case “storyline can unfold in sections, very much like the chapters in a book” (p. 82). Students can ask and answer questions along the way while the teacher tracks and displays the clicker responses.

That cases effectively promote learning gains is well established, but most of that work has been done in small classes where cases are discussed. The research of Terry et al. (2016) demonstrates cases’ viability in large courses: “Student learning in all case subject areas occurred across the board during the semester in every one of the six classes” (p. 86). Cases are effective because they deeply immerse students in the material they need to learn. Students find the stories intriguing and must use course content to explain their outcomes.

The researchers were also interested in the nature of the clicker questions. Did it matter whether the questions were lower order (fact based) or higher order (critical thinking)? A good bit of research supports the importance of higher-order questions. The belief that such questions promote deeper learning is widely endorsed in the literature on questioning, which criticizes exams that rely on fact-based questions as well as teachers whose in-class discussions never get beyond questions with right-and-wrong answers. Interestingly, Terry et al. (2016) report that “there were no differences in learning gains between situations when LO [lower-order] questions were used versus HO [higher-order] questions” (p. 82). That was not what the team expected to find, and they propose several possible explanations for the surprising result. Learning gains were measured with pre- and post-tests, and the test questions on case content included both HO and LO items (two of each per case, plus one question relevant to the topic but not covered by the case), so the separation between question types was not complete. Perhaps their most convincing explanation involves the number of cases used in the course: eight in 40 class sessions. Eight cases may simply not be enough to produce measurable changes in critical thinking.

Herreid et al. (2014) focus on the nature of the case itself. The research team was intrigued by some earlier work showing that some cases led to more learning than others. Those early researchers surmised that cases producing greater learning gains did so by generating an emotional response that more effectively engaged students with the content, and it is that assumption the Herreid team tested: “We anticipated that cases which involved characters who were real people, were contemporary, and displayed . . . eight basic emotions [identified in other research] (anger, fear, sadness, disgust, surprise, anticipation, trust, joy) might have the greatest effect on the student learning” (p. 87). They also suggest that students would engage more deeply with cases that piqued their curiosity and interest.

And what did they find? Their work confirms that “cases differ in their emotional tone and their engagement potential” (Herreid et al., 2014, p. 92). Based on student ratings of how effectively each case engaged them and how much emotional involvement it engendered, the researchers found a “minor but statistically significant correlation among engagement, emotion, and learning gains” (p. 86). Students consistently reported that their engagement with a case was higher than their emotional involvement with it. As for learning gains, the case that produced the highest gains was not highly rated on either the engagement or the emotional dimension.

Herreid et al. (2014) conclude with a pragmatic question: “So, what does this all mean to the design of cases?” (p. 94). It’s not a question that can be clearly answered by this research, but the work offers some intriguing hints. It also shows that explaining why and how an instructional device like cases promotes learning is no simple matter.

References:

Herreid, C., Terry, D., Lemons, P., Armstrong, N., Brickman, P., and Ribbens, E. (2014). Emotion, engagement and case studies. Journal of College Science Teaching, 44(1), 86–95.

Terry, D., Lemons, P., Armstrong, N., Brickman, P., Ribbens, E., and Herreid, C. (2016). Eight is not enough: The level of questioning and its impact on learning in clicker cases. Journal of College Science Teaching, 46(2), 82–92.
