Do you remember life before smartphones?
If 2007 was a watershed moment, shaping a new generation that would grow up with iPhones, 2023 may go down as the next big milestone for society and life as we know it (Twenge, 2023). The dawn of the new year marked the arrival of a new player in the nation’s consciousness. Hello, AI language models and ChatGPT!
When OpenAI released ChatGPT, the best-known AI tool and the one I focus on here, it catapulted onto the higher education stage, and tech-savvy faculty quickly saw the writing on the wall (Scott, 2023). The affordances of this effortlessly available AI language model could make a variety of tasks, from mundane emails to complex papers, easier to start and complete. Many professionals savored how entering a prompt into ChatGPT provided the antidote to the poison of absent muses and procrastination.
Some faculty quickly modified their course assignments in an attempt to make them AI-proof. Sure enough, making instructions more specific to a class made ChatGPT’s responses significantly less accurate. But such tailoring would only go so far. Newer versions of ChatGPT seemed immune to these attempts to undermine the usefulness of its output in the name of good academic conduct.
The reality is that few of us can be sure how good or how bad AI is going to get. We can be sure that AI will underlie more and more of the processes we use every day. We can also hazard a guess that advances in AI technology will radically change what people do at work and how they do it. In the face of this uncertainty, it is important to know how to view new developments in AI in higher education. Just as there are guides to critical thinking that help us separate fake news from real news (e.g., Is the source credible?) or a good survey from a poor one (e.g., Is it valid and reliable?), we should have a handy guide for students and faculty to evaluate the use of AI. University recommendations are proliferating. It is time to get our own FEAL for AI and ask four main questions as we consider using it.
F: Will the task be faster?
At first blush, this seems like an easy win for AI. Enter a prompt, and the model spits out an answer in a few seconds. While this seems extremely convenient, the fast output can be deceiving. Often, one must add more prompts to get a better answer or to solve the issue at hand. The language used may not be how we would write. The output may be tangential. This cycle of prompting and reviewing new output is further complicated by the fact that ChatGPT’s access is limited to publicly available sources (not material behind paywalls or on individual faculty members’ servers). The result is that, in its almost human drive to please, ChatGPT will “hallucinate” and provide information that is not factual. Checking output and adding prompts adds time to what once seemed like a fast process. In some cases, it may be faster to do the task oneself.
E: Is it ethical?
Most colleges and universities will start the new academic year with faculty updating academic misconduct statements to address the use of AI. A good immediate step is for instructors to make clear whether AI use is acceptable. Faculty may also want to set limits on how much use is permissible. Is it all right for a student to use ChatGPT to get started if they then edit and complete the assignment themselves? If the student provides all the prompts used and a transcript of their interaction with ChatGPT, can they use it however they wish? While it may seem obvious, knowing whether the use of AI is allowed is an important check. Furthermore, correctly citing AI is an essential part of ethical use.
A: Is it accurate?
The extent of this hallucination is surprising. In addition to providing inaccurate facts about a person (ask it to write your biography and see whether it gets you right), ChatGPT has also been known to make up citations. They are good enough that a cursory glance suggests perfect formatting, but a close look reveals a conglomeration of various sources. I have seen numerous APA-style citations that are perfect in style but nonexistent in reality. Worse, the cited authors were real people who had worked on the topic, and the source journals existed. This makes detecting AI hallucinations much harder work. Especially in classes where the learning outcome is to accurately summarize a body of literature with full citations, ChatGPT may not deliver an A paper. Checking for accuracy is critical, and novices to a topic, whether faculty or students, may turn in grossly flawed work if they rely on AI.
L: Will I learn?
Although skeptics and cynics may complain that students will turn a blind eye to ethical issues and ignore accuracy concerns if they can get the job done fast, this would be an overgeneralization. If a student uses ChatGPT to do an assignment and pays little attention to the output before turning it in, they are unlikely to have learned the skills the assignment was designed to foster. If they use ChatGPT to generate examples to use as models or inspiration for their own work, learning could be taking place, though this is an empirical question that needs to be tested (my lab is testing this, so check back in a while). An abacus speeds up calculation but still requires the user to know how to compute mathematical functions; it is a tool that facilitates learning math skills. ChatGPT, too, could be a tool that helps students learn, but faculty need to reflect on how best to make that happen.
FEAL it out. Then act.
AI is not going anywhere. Instead of burying our heads in the sand and hoping someone else will take care of the issue for us, we should further develop our critical thinking skills regarding how best to use this transformative technology.
Regan A. R. Gurung, PhD, is associate vice provost and executive director for the Center for Teaching and Learning and professor of psychological science at Oregon State University. His latest book is Study Like a Champ. Follow him on Twitter @ReganARGurung.