Put Function Over Form When Designing Online Courses


Faculty tend to lament that student evaluations are just opportunities for students with bad grades to hammer their instructor. But I have always believed that student surveys were actually designed to inflate approval ratings. They nearly always start with simple class-format questions, such as “Did the instructor hold regular classes?” How many instructors would fail that one? The sequence of format questions gets the student into the rhythm of giving high scores, so that by the time a real question about the quality of learning appears, the student thinks, “Gee, coming in here I thought this teacher was really terrible, but after giving so many high scores, I guess maybe they’re OK.”

Students do not care about the format of a course; they care about its function, meaning how well they learned from it. No student tells a friend, “You should take Jones because he always holds regular classes.” Students care about aspects tied to learning, such as whether the lectures were clear and engaging. Learning lives in a course’s function, not its format.

Despite this, institutions tend to restrict their instructional designers to a grocery list of recommended format elements, such as “a 2–3 page introduction, a 5–8 page assignment, two weekly discussion questions.” Instructional designers then pass this “format first” thinking on to the instructors building courses, leaving no real evaluation of whether the content will actually produce learning.

I recently taught an online course developed by another instructor. It was approved because it met the usual course design standards, but I could tell immediately that it was not a good learning tool. Readings were either too superficial or too complex. Assessments were so broad that they could not be completed according to the directions. Not surprisingly, students were confused, even though the course checked all of the usual course design boxes.

A simple fix to this problem is to have someone go through the course material from the standpoint of a student and ask one question: “Did I learn?” This simple yet powerful question is not asked enough in higher education. I briefly worked in corporate training, and after one session a senior director complained to me that it did not include a green three-ring binder for each participant; without the binder, he insisted, the session could not have been real training. Here again, format mattered more to him than function. By contrast, I threw out the usual 20-question post-training evaluation in favor of one with only a few questions, including: “Did you learn anything from the training, and if so, what did you learn?”

Some institutions use fellow faculty from the instructor’s department to evaluate courses, but this is a poor measure of learning because faculty suffer from the “expert’s blind spot”: the inability of an expert to understand the problems of a novice because the expert no longer sees the material the way a novice does. The expert draws on knowledge the novice does not have to make sense of new material, and often does not realize that the novice lacks this knowledge. I see many cases where faculty think an assigned article is appropriate and effective when in fact it is above the level of most students.

This is why someone closer to the student’s perspective should evaluate whether the course produces learning. That evaluator need not be the instructional designer; it could be anyone. But the instructional designer is an ideal choice because he or she can both spot problems and suggest solutions, pointing out, for example, that certain parts of a course video are unclear or that a reading did not help him or her understand the course concepts.

Giving this job to the instructional designer also helps make him or her a more genuine partner in the course development process. Instructional designers want to be involved in content creation, and they have many good ideas about it, but the grocery-list directive tends to reduce them to loading into the LMS whatever they are given. They focus on whether the content is in the correct file format to work in the LMS. They check meaningless format features, such as whether the school’s logo appears on every slide of a PowerPoint presentation, rather than whether the presentation was effective or whether some other type of content would work better. An instructional designer should be versed in the principles of effective communication in an online environment. A designer with that mindset might suggest, for instance, opening a unit with a greenscreen video because the content lends itself to images the instructor can point to on screen.

Institutions overlook a major instructional resource when they restrict their instructional designers to enforcing format rules for online courses. Moreover, every instructional designer I have met wants to be more involved in actually designing instruction rather than just loading material that has already been made. Broadening their duties is a win for everyone.
