What if a student were having trouble understanding an assignment and asked a smart friend for help? This friend has not taken the class but can get the student started on the assignment, or at least unpack the instructions a little. Parents do this often when their children are struggling with an assignment. They know not to provide the answer but to help get their children on the right track. Smart friends may not always play by the same rules but can be a great aid nonetheless. The informational and emotional support that friends and parents provide can prevent frustration, shore up flagging motivation, and aid learning, so long as they do not just hand over the solutions. After all, the assignment needs to be the student's own work; it was designed to develop skills or content knowledge.
In many ways, ChatGPT and artificial intelligence (AI) can be seen as smart friends. In fact, AI is an extremely smart friend who knows more than most smart friends you or I will ever have. AI is also much faster, less prone to bias (though not completely free of it), and far more readily available than the average smart friend. Getting aid from AI is not much more or less problematic than getting help from a smart friend. How, then, is AI different from any real smart friend, and what do these differences mean for higher education? We need to consider two major questions and get ready for a sea change.
How much do you care about cheating?
Let’s move from assignments to tests. Few if any parents would give their children the answers on a test. No smart friend should either. But of course, this is not always the case. Well-meaning parents and friends may assist students more than they should. Who is to blame? The student who asks and the person who provides the answers share responsibility. Technology has now taken other people out of the equation: students can ask AI such as ChatGPT for help on assignments and tests. AI is efficient, and at least at this point, the aid is hard to detect. Although using AI is in many ways like using a real live smart friend, higher education is far more in a tizzy about ChatGPT-aided cheating than it ever was about smart friends.
The bilious response to cheating has a long history. In my lifetime, students have clandestinely brought materials into in-person exams; used notes, books, or classmates when told not to; and even paid others to do the work for them. There are academic consequences for each of these forms of academic dishonesty. Universities have clear punishments for violations of the honor code and clear policies and procedures for invoking them. The use of AI is harder to detect, but it can and should be subject to the same consequences if it constitutes academic dishonesty (did the instructor say AI use was not allowed?). There is also a long line of research showing the utility of having students sign statements affirming they have not cheated (e.g., Gurung & Wilhelm, 2012). It may be a good time to apply such honor codes to assignments and exams, making clear when AI is allowed (if it is).
As I grapple with how I should change my teaching or what more I can do to prevent the misuse of AI, there are four realities that alleviate many of my concerns:
- Just because they can cheat or use AI does not mean they will.
- Just because they can cheat or use AI does not mean they will do better.
- Just because they do use AI does not mean they are not learning.
- Just because they are not using AI does not mean they are learning.
Each of these realities needs empirical testing, and this kind of scholarship of teaching and learning has only just begun. In the meantime, we must grapple with what I see as the more important question.
What will AI use do to learning?
Yoked to the indignation that comes from knowing a student may have engaged in academic dishonesty is the realization that a learning goal has not been met. If we leave aside the age-old issue of cheating, granting that it is bad and that students should not engage in it, what about the question of learning? Is it true that the use of AI will hurt learning? This is an empirical question in need of an answer. At one level, it seems AI use must negatively affect learning.
If I ask a student to write an essay, and the student uses ChatGPT, I am grading the AI and not the student. The student may have learned something if they read the output as they copy-pasted it, but there is a good chance that they did not. At another level, it is possible that the use of AI provided the student with the impetus to learn when they would otherwise have been stuck.
Many students report not being able to even start an assignment. They believe they do not know enough. They may not fully understand the instructions. They may simply dislike the assignment and procrastinate, a behavior linked to anxiety over starting and fear of failure. Many of us teachers look at our instructions and think they are clear, but this belief is often skewed by the bias of expertise. Just as giving students a practice test can reduce anxiety over an exam by showing them what the exam will look like, providing exemplars of assignments can likewise reduce students' anxiety and help them get started. AI such as ChatGPT does just this.
Idiosyncratic learning outcomes still reign supreme
Guidelines for coping with AI abound, but as we continue to watch technological advancements, there is a bigger question higher education should return to: What are the skills and content we want our students to know?
I have eight writing assignments in my introductory psychology class. I put all of them into ChatGPT, and the AI provided mostly satisfactory answers to about half of them. The answers were only mostly satisfactory, and only on half, because each of my assignments is directly tied to, and explicitly references, my course learning outcomes. ChatGPT does not know my course learning outcomes. The students in my class should (the outcomes are on the syllabus, and we review them every time I discuss the assignments). While I do not currently provide assignment exemplars, I noted that the ChatGPT responses would be great for scaffolding student learning, and I will include directions on how to use AI the next time I teach the class. In addition, I will further modify my assignments to link them even more directly to my learning outcomes.
I will also face a critical reality, as must all of us in higher education: given what AI can do, we can no longer have our students do the same things we have had them do for the past 25 years. Education needs to evolve to accommodate the affordances of technological advancements. I am excited to change what I do to better challenge my students toward deeper learning. Let's do it together.
Regan A. R. Gurung, PhD, is associate vice provost and executive director for the Center for Teaching and Learning and professor of psychological science at Oregon State University. Follow him on Twitter @ReganARGurung.