Do Online Students Cheat More on Tests?

A lot of faculty worry that they do. Given the cheating epidemic in college courses, why wouldn’t students be even more inclined to cheat in an unmonitored exam situation? Add to that how tech-savvy most college students are. Many know their way around computers and software better than their professors. Several studies report that the belief that students cheat more on online tests is most strongly held by faculty who’ve never taught an online course. Those who have taught online are less likely to report discernible differences in cheating between online and face-to-face courses. But those are faculty perceptions, not hard, empirical evidence.

Study author Beck correctly notes that research on cheating abounds, and it addresses a wide range of different questions and issues. Faculty have been asked how often they think cheating occurs and what they do about it when it happens. Students have been asked whether they or their colleagues would cheat, given a certain set of circumstances. Students have been asked how often they cheat, how often they think their colleagues do, and whether they report cheating. The problem with much of this descriptive research is that it summarizes perceptions—what faculty and students think and have experienced with respect to cheating. And this in part explains why the results vary widely (studies report cheating rates anywhere between 9 and 95 percent) and are sometimes contradictory and therefore inconclusive.

Beck opted to take a different approach in her study of cheating in online and face-to-face classes. She used a statistical model to predict academic dishonesty in testing. It uses measures of “human capital” (GPA and class rank, for example) to predict exam scores. “This model proposes that the more human capital variables explain variation in examination scores, the more likely the examination scores reflect students’ abilities and the less likely academic dishonesty was involved in testing.” (p. 65) So if a student has a high GPA and is taking a major course, the assumption is that the student studied, cares about the course, and therefore earned the grade. But if a student has a low GPA and doesn’t care about the course and ends up with a high exam score, chances are the student cheated. It’s an interesting method with a good deal more complexity than described here. The article includes full details of the assumptions and how the model was developed and used.

The study looked at exam scores (midterms and finals, all containing the same questions) of students in three sections of the same course. One section contained an online unmonitored exam, another was an online hybrid section with a monitored exam (students took this exam in a testing center facility), and the third was a face-to-face section with the test monitored by the instructor. In the online unmonitored section, questions were randomized so that each student received a unique test. Online students could not exit or restart an exam once they began taking it. The exam was presented to them one question at a time, they could not move backward through the questions, and the exam was automatically submitted after 70 minutes, the time allowed in the other two formats. Students in all sections were warned not to engage in cheating.

“Based on the results in this study, students in online courses, with unmonitored testing, are no more likely to cheat on an examination than students in hybrid and F2F courses using monitored testing, nor are students with low GPAs more likely to enroll in online courses.” (p. 72) Some had suggested that because students who had not taken an online course reported that they thought it would be easier to cheat in online courses, students with lower GPAs might be motivated to take online courses. There were only 19 students in the online course in this study, but across these three sections, GPA did not differ significantly.

Using this interesting model to predict cheating, there was no evidence that it occurred to a greater degree in the unmonitored tests given in the online course. That's the good news. The bad news: "There is ample opportunity for cheating across all types of course delivery modes, which has been demonstrated through decades of research." (p. 73) In other words, we still have a problem; based on these results, it just isn't more serious in online courses.

Reference:

Beck, V. (2014). Testing a model to predict online cheating—Much ado about nothing. Active Learning in Higher Education, 15(1), 65–75.

Maryellen Weimer is the editor of The Teaching Professor.


