The arrival of ChatGPT sent shockwaves across academia as articles with titles like “Yes, We Are in a (ChatGPT) Crisis” splashed across higher education media. Reports of students using it to write their papers led to the immediate goal of keeping students away from AI.
There is also a growing understanding that students will use AI in their future work, and as higher education is meant to prepare students for the future, it would do better to teach students how to use it than to adopt the Luddite position of forbidding its use. AI is just another tool to assist humans in their endeavors. It is like the ship’s computer on Star Trek, which would answer questions to provide the crew with valuable information for decision making. That is how the tool is being used now and will be used in the future. For instance, astronomers use it to scan images of millions of stars to find anomalies. Now that the initial shock has abated, we can take a more levelheaded look at the real dangers of AI and how to incorporate it into assignments that prepare students for the world that they will enter.
What is the real danger of AI?
Higher education has two main worries about AI. One, students can use it to write papers, making plagiarism easier. Two, it might give students false information. Each is a bit of a red herring.
First, there are AI checkers, such as AI Detector, through which instructors can run student work. A lot has been said about the fact that these checkers are not perfect, but neither are ordinary plagiarism checkers like Turnitin, and that has not created similar hand-wringing in academia. Accuracy claims for these detectors range from 95 to 99 percent, and I personally found AI Detector remarkably accurate with some test cases.
But the point is that the situation is no different from ordinary plagiarism. There are ways to fool Turnitin, just as there are ways to fool AI detectors. There is nothing we can do about that other than to have institutional policies against plagiarism and do our best to detect it. We have laws against murder and police to investigate it, but people still kill, and we go about our business despite this fact. Higher education needs to do the same. The possibility of plagiarism says nothing about whether we should assign students to use AI, just as the possibility of ordinary plagiarism has not stopped us from giving students writing assignments.
As for accuracy, there seems to be a widespread assumption that AI-generated information must be wrong because it draws from the unlettered masses rather than ivory-tower sages, but I have done some test queries and found the results remarkably accurate. Plus, plenty of academic articles have been found to contain incorrect information or to be outright fraudulent. And the Wisdom of Crowds effect demonstrates that for certain types of questions, the aggregate answer of a large group of amateurs will be more accurate than that of a small group of professionals.
We insist that students cite sources for any factual claims, and if the AI system they use does not provide a source, then students need to find one with that information if they are to use it in their work. Note that Google is currently experimenting with an AI system that does provide sources, as seen in the screenshot below. Faculty can recommend that students use it for their research.
Higher education has moved away from having students memorize information on grounds that there simply is too much information to memorize. We now teach information literacy, which is knowledge of how to find information using available tools. AI is just the latest advancement in information retrieval, and higher education needs to focus on teaching how to use it.
Plus, as finding information gets easier and easier, learning how to evaluate and apply it becomes more and more important. Faculty should focus assignments more on critical thinking.
AI assignments
Rather than try to delineate all the various assignments that can use AI, it is easier to put them into categories for faculty to use as they wish. Here are two such categories.
Research on AI
This kind of assignment makes AI itself the focus. An instructor can assign students to choose a class topic and ask an AI system to answer a question about it, such as the example in the screenshot above about the ethical issues with genetics. Students would then evaluate the answer by comparing it with other sources. They would answer questions like the following:
How comprehensive is the answer? What topics were left out?
How accurate is the answer? Was some information wrong, and if so, which information?
Did the answer reflect any biases?
The instructor can also require students to ask the same question in different ways and evaluate how the answers differ. In this way, students learn how an AI system interprets a question and produces results. That knowledge will inform how they use AI systems in the future. Plus, they are learning about the topic through their use of AI responses and comparative research on those responses.
AI as the starting point for research
A second assignment type is for students to use AI to gain an overview of the topic and then pursue it in more detail with focused resources, similar to how Wikipedia is already used. Here students pick a topic and ask a couple of AI systems (e.g., ChatGPT and Google Bard) a question about it so they get a range of answers. They combine the answers to get the lay of the land on that topic and then build their work from other resources on the topic.
For both assignments, students submit the AI results of their query and the product that they created. This allows instructors to distinguish student thinking from the AI output.
AI evaluation of student work
Besides research, students can use AI to generate feedback on their work. The feedback ChatGPT provides focuses on general writing matters, such as composition and detail. It will not provide much feedback on substantive issues, such as factual errors or missing topics. But it is a good way for students to improve the clarity of their writing before submitting it to the instructor. See the first part of the feedback ChatGPT provided on a sample of student work below:
Instructors can encourage students to use Grammarly or the built-in writing checker in their word processor to address simple mistakes in grammar and spelling and then submit the work to an AI system to improve the clarity of the writing. This frees instructors from marking writing errors and allows them to focus on the thinking issues that they would rather discuss anyway. This use of AI is not much different from peer review, which instructors have learned improves student work. It also gives students skills that they can apply in their future work.
These are just a few ways to teach students about how to use AI in their work. Undoubtedly, more will come as systems develop. But in the end AI is just another tool, and the job of higher education is to teach students how to use it to be more successful in the future.