What We Know about Online Student Evaluations


For Those Who Teach from Maryellen Weimer

Online course evaluations are pretty much the norm now. Fortunately, the switch from in-class to online data collection has generated plenty of research that compares the two. Unfortunately, as is true for course evaluations generally, most faculty and administrators are less cognizant of the research findings than they should be.

There is one exception: virtually everyone has correctly concluded that response rates dipped when evaluations went online. Boysen (2016), who has reviewed much of the research on the decline, writes that it's safe to assume at least 20 percent fewer students complete online evaluations than complete them in class. That means a sizable dip in the amount of feedback faculty receive. In some cases, it's enough to compromise the representativeness of the sample—what those in psychometrics call sampling error, which is determined using formulas from sampling theory and is a function of class size and response rate. So, if you've got 30 students in the course, the margin of error is 3 percent with a 97 percent response rate. With 100 students in the course, a 21 percent response rate provides a 10 percent margin of error. Debate continues over what error percentage is acceptable for student ratings. Response rate matters, whether the data are used in promotion and tenure decisions or by teachers attempting to improve their instruction.
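For readers who want to check figures for their own classes, the relationship between class size, response rate, and margin of error can be sketched with the standard finite-population formula from sampling theory. This is an illustrative calculation only: it assumes maximum variance (p = 0.5) and 95 percent confidence (z = 1.96), and the specific figures cited above may rest on a different confidence level or published table.

```python
import math

def margin_of_error(class_size: int, response_rate: float, z: float = 1.96) -> float:
    """Approximate margin of error (as a proportion) for course-evaluation
    results, using the standard formula with a finite-population correction.
    Assumes maximum variance (p = 0.5); z = 1.96 corresponds to ~95% confidence.
    """
    n = round(class_size * response_rate)  # number of completed evaluations
    if n >= class_size:
        return 0.0  # everyone responded, so there is no sampling error
    se = math.sqrt(0.25 / n)                              # standard error at p = 0.5
    fpc = math.sqrt((class_size - n) / (class_size - 1))  # finite-population correction
    return z * se * fpc

# A 30-student course with a 97 percent response rate (29 completions)
print(f"{margin_of_error(30, 0.97):.1%}")  # prints 3.4%, close to the 3 percent cited above
```

The finite-population correction matters here because a course section, unlike a national survey, is a small population: as the response rate approaches 100 percent, the margin of error shrinks toward zero regardless of class size.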

The pragmatic question is whether teachers can take actions that boost response rates. The prevalence of electronic devices in face-to-face classrooms makes it possible for students to complete online evaluations during class, which ups the response rate. Another approach is to go after the reasons students don't complete the evaluations: the evaluations come at the end of the course, so students won't reap the benefits of their feedback; students are asked to evaluate too many courses too often; and they don't believe their feedback makes any difference. Teachers can't change some of those conditions, but they can certainly point out course changes they've made as a result of rating feedback. They can solicit feedback at points during the course, respond to it, and act on appropriate student suggestions. Finally, there's considerable evidence that extra credit works, even in very small amounts (e.g., Jaquett et al., 2016). I have wondered about the ethics of “counting” completed evaluations in the overall grade calculation, even if the points toward the final grade are trivial. But ethics aside, earning points for doing an evaluation feeds the student assumption that unless something counts, there's no reason to do it. That's an assumption more appropriately debunked than supported.

Beyond response rates, there's the overarching question of how online ratings compare with face-to-face assessments. There's lots of faculty chatter about who completes online evaluations and whether a higher percentage of disgruntled students use them to get even with teachers they don't like. So far, no evidence has emerged showing a difference in the negativity or positivity of online and face-to-face evaluations (e.g., Stowell et al., 2012). In fact, some research has found that students with high GPAs are more likely to complete online evaluations than those with low GPAs (e.g., Adams & Umbach, 2012). As for a general conclusion, most research has found that online ratings are not significantly different from face-to-face ratings.

Fortunately, researchers are asking the questions we need answered about online evaluations. Their findings offer the reassurance needed to take those results seriously. Of course, research findings apply in general—they’re true most of the time. It always makes sense to turn an analytic eye toward individual results and, if they’re at odds with research findings, to explore why.

References

Adams, M. J. D., & Umbach, P. D. (2012). Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Research in Higher Education, 53, 576–591. https://doi.org/10.1007/s11162-011-9240-5

Boysen, G. A. (2016). Using student evaluations to improve teaching: Evidence-based recommendations. Scholarship of Teaching and Learning in Psychology, 2(4), 273–284. https://doi.org/10.1037/stl0000069

Jaquett, C. M., VanMaaren, V. G., & Williams, R. L. (2016). The effect of extra-credit incentives on student submission of end-of-course evaluations. Scholarship of Teaching and Learning in Psychology, 2(1), 49–61. https://doi.org/10.1037/stl0000052

Stowell, J. R., Addison, W. E., & Smith, J. L. (2012). Comparison of online and classroom-based student evaluations of instruction. Assessment & Evaluation in Higher Education, 37(4), 465–473. https://doi.org/10.1080/02602938.2010.545869
