What We Know about Online Student Evaluations


Online course evaluations are pretty much the norm now. Fortunately, the switch from in-class to online data collection has generated plenty of research that compares the two. Unfortunately, as is true for course evaluations generally, most faculty and administrators are less cognizant of the research findings than they should be.

For Those Who Teach from Maryellen Weimer

There is one exception: virtually everyone has correctly concluded that response rates dipped when the evaluations went online. Boysen (2016), who's reviewed much of the research on the decline, writes that it's safe to assume that at least 20 percent fewer students complete online evaluations than complete in-class surveys. That means a sizable dip in the amount of feedback faculty receive. In some cases, it's enough to compromise the representativeness of the sample, producing what those in psychometrics call sampling error, which is estimated with formulas from sampling theory and is a function of class size and response rate. So, if you've got 30 students in the course, the margin of error is 3 percent with a 97 percent response rate. With 100 students in the course, a 21 percent response rate yields a 10 percent margin of error. Debate still exists over what margin of error is acceptable for student ratings. Response rate matters, whether the data are used in promotion and tenure decisions or by teachers attempting to improve their instruction.
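For those who want to run the numbers on their own classes, here is a minimal sketch of the usual finite-population margin-of-error calculation. The function name and the 95 percent confidence default are illustrative assumptions rather than anything specified in the research cited above, and because the figures quoted in the previous paragraph reflect a particular confidence level, this sketch will not necessarily reproduce them exactly.

```python
import math

def margin_of_error(class_size, responses, z=1.96):
    """Rough margin of error for course ratings, treating respondents as a
    simple random sample of the class and applying a finite population
    correction. z = 1.96 corresponds to roughly 95 percent confidence; a
    smaller z (a looser confidence level) gives a smaller margin."""
    p = 0.5  # most conservative assumption about the underlying proportion
    se = math.sqrt(p * (1 - p) / responses)  # standard error of the sample
    fpc = math.sqrt((class_size - responses) / (class_size - 1))  # finite population correction
    return z * se * fpc

# The two scenarios mentioned above:
print(margin_of_error(30, round(0.97 * 30)))  # 30 students, 97% response rate
print(margin_of_error(100, 21))               # 100 students, 21% response rate
```

The practical point the calculation illustrates is that small classes need very high response rates before the results can be read with much precision, while large classes can tolerate lower rates.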

The pragmatic question is whether teachers can take actions that boost response rates. The prevalence of electronic devices in face-to-face classrooms makes it possible for students to complete online evaluations during class, which ups the response rate. Another approach is to go after the reasons students don't complete the evaluations: the evaluations come at the end of the course, so students won't reap the benefits of their own feedback; students are asked to evaluate too many courses too often; and they don't believe their feedback makes any difference. Teachers can't change some of those reasons, but they can certainly point out course changes they've made as a result of rating feedback. They can solicit feedback at points during the course, respond to it, and act on appropriate student suggestions. Finally, there's considerable evidence that extra credit works, even in very small amounts (e.g., Jaquett et al., 2016). I have wondered about the ethics involved in "counting" completed evaluations in the overall grade calculation, even when the points toward the final grade are trivial. But ethics aside, earning points for doing an evaluation does feed the student assumption that unless something counts, there's no reason to do it. That's an assumption more appropriately debunked than supported.

Beyond response rates, there's the overarching question of how online ratings compare with face-to-face assessments. There's lots of faculty chatter about who completes online evaluations and whether a higher percentage of disgruntled students use them to get even with teachers they don't like. So far, no evidence has emerged of a difference in the negativity or positivity of online and face-to-face evaluations (e.g., Stowell et al., 2012). In fact, some research has found that students with high GPAs are more likely to complete online evaluations than those with low GPAs (e.g., Adams & Umbach, 2012). As for a general conclusion, most research has found that online ratings are not significantly different from face-to-face ratings.

Fortunately, researchers are asking the questions we need answered about online evaluations. Their findings offer the reassurance needed to take those results seriously. Of course, research findings apply in general—they’re true most of the time. It always makes sense to turn an analytic eye toward individual results and, if they’re at odds with research findings, to explore why.

References

Adams, M. J. D., & Umbach, P. D. (2012). Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Research in Higher Education, 53, 576–591. https://doi.org/10.1007/s11162-011-9240-5

Boysen, G. A. (2016). Using student evaluations to improve teaching: Evidence-based recommendations. Scholarship of Teaching and Learning in Psychology, 2(4), 273–284. https://doi.org/10.1037/stl0000069

Jaquett, C. M., VanMaaren, V. G., & Williams, R. L. (2016). The effect of extra-credit incentives on student submission of end-of-course evaluations. Scholarship of Teaching and Learning in Psychology, 2(1), 49–61. https://doi.org/10.1037/stl0000052

Stowell, J. R., Addison, W. E., & Smith, J. L. (2012). Comparison of online and classroom-based student evaluations of instruction. Assessment & Evaluation in Higher Education, 37(4), 465–473. https://doi.org/10.1080/02602938.2010.545869


