In the rural part of North Central Pennsylvania where I live, a lot of families have owned the same farmland for generations. Houses are handed down, with each new family adjusting the home to their needs—adding a porch here, a back bedroom there, an attachment between the house and garage. The houses look cobbled together rather than designed. I got to thinking that might be the way some of our courses look. They’re passed along, new material gets added, and current content mostly stays. Sometimes there’s a new approach or another objective, but these additions all get attached to the basic course structure. With the prevalence of online learning, a larger presence of instructional designers, and a greater need for course consistency, we’re starting to see what well-designed courses look like and recognizing that many of the courses we teach don’t look that way.
Course design has rarely been a top or trendy instructional priority. For those without teaching experience, there’s much greater focus on classroom management issues, active learning, and student engagement. The assumption is, I guess, that new teachers will be handling courses that have been taught for years, so they’ll have syllabi, texts, and colleagues they can consult with about what to teach, when, and how.
For those of us who’ve taught a course multiple times, there’s the blindness that comes with expertise—a curse of knowledge that makes it hard for us to see what others don’t understand. In the basic communication course I taught, we used models to explain the relationships between parts of the communication process. I used to fill a blackboard with diagrammatic depictions of senders, receivers, messages, and feedback. Then one day in class a student asked, “What do you mean, ‘models’? A model airplane is a real thing. These models aren’t real. How do they work?” I had a hard time answering.
Expert blind spots make it hard to see course design flaws—missing, ineffective, or misaligned components. Three faculty members at the Air Force Academy (Robinson et al., 2021) wondered whether diagnosing design problems might be easier if faculty considered a fictional course—one called Colorado Mountain Hiking, offered to students who want to learn how to hike or guide others in the Rocky Mountains. Here’s the learning outcome: “Students will be able to safely hike from the base to the summit of a 14,000′ Colorado mountain in a single day” (p. 101). Using a design model developed by Jones et al. (2014), the authors have guided faculty groups through the design of such a course.
“We have found that identifying flaws in the fictional course leads to instructors more easily diagnosing and correcting design flaws in their own courses” (p. 103). For example, this design process starts with student learning factors, and the question to faculty is, “Would you lead students on a hike if they didn’t have appropriate equipment?” Of course not, which leads directly to whether students taught by those in the workshop have the knowledge, skills, and resources that help to ensure successful course completion.
Designing the fictional course helps faculty diagnose three basic course design flaws: missing, ineffective, and misaligned components. What’s missing from a course usually hasn’t been intentionally left out; teachers just haven’t thought about it. Imagine a student who arrives for the hike wearing flip-flops or shares that they have a heart condition two days before the trip. Sometimes it’s the learning outcomes that are missing from course designs: Would you take students on a hike and not tell them the destination? Or maybe it’s formative assessment that’s missing: Would you want to find out the level of students’ physical conditioning on the day of the hike?
The second design flaw involves ineffective components. If the learning objective is something like “understand Colorado mountain hiking,” would the students expect to hike a 14,000-foot peak, or might they think they’d be surveying trail locations throughout the state? Maybe the learning experiences aren’t effective; maybe students participate in a workout regimen that lacks the intensity needed to help them hike at high altitude.
Finally, there can be design issues associated with how the course components align. Students need to get in shape for a challenging hike like this, and the instructor prepares them by giving presentation after presentation explaining the importance of getting into shape. The students end up able to explain the various aspects of getting into shape but have no experience actually doing so.
The fictional course becomes a compelling example that makes clear that coherence is inherently a part of good course design. For many of us who are still beginners at instructional design, it’s easy to cobble a course together, adding features that seem like improvements but compromise its integrity. The result ends up resembling one of those rural Pennsylvania homes that hang together but without any sense of design.
Robinson, S. E., Noyd, R. K., & Jones, S. K. (2021). Helping instructors identify course design flaws. College Teaching, 69(2), 100–106. https://doi.org/10.1080/87567555.2020.1828251
Jones, S. K., Noyd, R. K., & Sagendorf, K. S. (2014). Building a pathway for student learning: A how-to guide to course design. Stylus.