Course Redesign: A Compelling Example

Many of our course revisions happen without much planning. A new idea comes down the pike, an interesting technology option becomes available, a colleague shares a strategy that effectively deals with an issue, and we just use it! So the course evolves and changes, but not all that systematically. As a result, most of the change happens at the margins. That’s not bad, but it does not result in transformational change. It’s not change that heads the course in a whole new direction. Change on that scale makes most faculty nervous. It requires considerable work and often entails using teaching practices outside their current comfort zone.

What we need are good examples of how big changes can be implemented incrementally and iteratively. And the article highlighted here contains such an example.

The upper-division neurophysiology course, enrolling between 80 and 110 students, was redesigned over four years (with fine-tuning still going on). It started out as a traditional lecture course with familiar problems. The instructor wanted students to apply course concepts in new situations and to develop more expert-like thinking, but the students were focused on memorizing: “In many cases, students could only reproduce information from lecture” (Casagrand & Semsar, 2017, p. 195). To improve students’ ability to apply the content, the instructor decided to implement some evidence-based active learning approaches. The course’s content was not changed. The instructor started by creating homework assignments that gave students practice working with concepts they struggled with on the exams. The homework assignments started out short and involved lower-order cognitive skills, such as the vocabulary students needed to answer higher-order questions. Students responded positively, and three years later the number and level of questions had increased; the homework was due weekly and counted for 15 percent of students’ grades.

After the addition of the homework assignments, the instructor started using clicker questions in class, once or twice per session. Encouraged by favorable student responses, the instructor increased their use; they are now asked between two and six times per class session, and most are higher-order questions.

Perhaps most interesting, to encourage students to work together on homework assignments, the instructor created an informal, optional homework help room that was open for several hours the day before the homework was due. Students could help each other there; a TA was available, and sometimes the instructor was as well.

As the revision proceeded, the instructor introduced learning goals that made explicit what students were expected to know and be able to do. As a result of this set of changes, the course exams gradually changed, so that by the end of the four-year revision period only 14 percent of the exam questions remained the same. The content hadn’t changed, but the questions now asked students to demonstrate a conceptual understanding of the material. The questions were also more closely aligned with the homework, the clicker questions, and the course learning goals. The instructor also worked to maintain exam averages, thereby affirming the continued rigor of the course.

Another factor makes this course revision a great example. When the instructor started making these changes, she, like most teachers, wasn’t thinking about how she was going to assess them, so she gathered no baseline data. But with this many revisions involving this much work, she really needed to know whether the reform was worth the effort. She did have class responses to 12 exam questions used both before and after the course redesign, and she developed a unique way to use Bloom’s taxonomy to assess changes in the cognitive level of the exams. The results of this analysis confirmed that the course revisions were well worth the effort. On the exam questions used before and after the revisions, student performance was significantly higher in the post-reform semester, rising from 59 percent to 76 percent. Students answered more of both the lower- and higher-order questions correctly. The revised course exams had double the number of higher-order questions. There was also improvement for both higher- and lower-performing students.

Student responses to the reforms were positive: “Although students may not be best at recognizing what helps them learn or in assessing their true level of understanding, student attitudes are an indicator of student buy-in to teaching strategies” (Casagrand & Semsar, 2017, p. 200). The most compelling evidence was students’ use of the optional help room. Seventy percent reported using it, and of those, 93 percent said they worked with peers while there. Before the revisions, an estimated 10–20 percent of students came to instructor/TA office hours.

Finally, there’s a thoughtful analysis of what made this revision a success. First, the authors point to the inclusion of multiple active learning strategies and the fact that they were implemented incrementally, which allowed them to be refined as the revision moved forward. Second, the course’s goals and its formative and summative assessments were tightly aligned. Students knew what they needed to know and be able to do, and they had opportunities to practice and receive feedback before the exams. Third, students were on board; they recognized that what they were experiencing in class was improving their learning and their performance. Lastly, although the instructor lacked departmental support in the beginning, as the project moved forward the department formed an association with a science education initiative that provided various kinds of support.

It was a best-case scenario, with a variety of factors aligning for success. Unfortunately, many faculty face barriers to implementing these kinds of substantive changes. Nonetheless, there is still much to be learned from this example of course redesign.

Reference

Casagrand, J., & Semsar, K. (2017). Redesigning a course to help students achieve higher-order cognitive thinking skills: From goals and mechanics to student outcomes. Advances in Physiology Education, 41(2), 194–202.
