ChatGPT or: How I Learned to Stop Worrying and Love Generative AI

“At the moment, it’s looking a lot like the end of essays as an assignment for education.”
—Lilian Edwards (as cited in Stokel-Walker, 2022)

The recent emergence of ChatGPT has hit higher education like a lightning bolt, leaving many educators trying to process its implications. ChatGPT is a free website that allows users to type in a question and get a remarkably well-written response that reads like a college essay. Questions can range from research topics to creative writing; it will even write a poem for you on a topic you give it. Not surprisingly, students are already submitting work created by it for their assignments (Stokel-Walker, 2022).

While prior versions of artificial intelligence, including the early version of ChatGPT, tended to produce clunky text with obvious factual or writing errors, the recently released update is generally free of writing errors and other obvious indications that a response is machine written. Consider the reply I received when I asked it, “What are the ethical issues with genetic engineering?”

ChatGPT replied to the question as follows:

"Genetic engineering, also known as genetic modification, refers to the process of manipulating an organism's DNA in order to introduce or alter specific traits or characteristics. This technology has the potential to bring about significant benefits, such as the development of new medical treatments and the production of crops that are resistant to pests and diseases. However, it also raises a number of ethical concerns.

One ethical issue with genetic engineering is the potential for unequal access to and distribution of its benefits. If the technology is only available to certain individuals or groups, it could perpetuate or exacerbate existing inequalities.

Another ethical concern is the potential for unintended consequences of genetic modification. For example, genetically modified organisms (GMOs) could have unexpected effects on the environment or other species. There is also the possibility that genetically modified organisms could escape from containment and spread their modified genes to other organisms, which could have long-term and potentially unforeseen consequences.

There are also ethical concerns about the use of genetic engineering for non-therapeutic purposes, such as the creation of genetically modified animals for food or other commercial purposes. Some people argue that these uses of genetic engineering are unethical because they may involve the exploitation or mistreatment of animals.

Finally, there are concerns about the potential for genetic engineering to be used for unethical purposes, such as the creation of genetically modified humans or the use of genetic engineering to enhance certain traits in people. These uses of genetic engineering raise questions about the ethical limits of science and the potential risks and unintended consequences of such interventions.

Overall, the ethical issues surrounding genetic engineering are complex and multifaceted, and they require careful consideration and ongoing dialogue among scientists, policymakers, and the general public."

The writing is polished and the answers accurate, though they are somewhat superficial in that I would expect students to elaborate on each of these points. The system, however, is also designed to allow for a dialogue with the user, who can ask pointed questions about elements of the response to get further information. It would not take long for a student to get a competent college essay from the system.
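
As an aside for the technically curious: I produced the reply above simply by typing into the free website, but the same kind of dialogue can also be scripted. The sketch below is illustrative only; it assumes OpenAI’s official openai Python package, an API key, and a model name of my choosing, none of which appear in the exchange above.

    # Illustrative sketch only: assumes OpenAI's official "openai" Python
    # package and an OPENAI_API_KEY environment variable; the model name
    # is an assumption. The website itself requires no code.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    messages = [{"role": "user",
                 "content": "What are the ethical issues with genetic engineering?"}]
    response = client.chat.completions.create(model="gpt-3.5-turbo",
                                              messages=messages)
    answer = response.choices[0].message.content
    print(answer)

    # The dialogue design lets the user press for depth, just as a student
    # would in the chat window:
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user",
                     "content": "Elaborate on the unequal-access concern."})
    follow_up = client.chat.completions.create(model="gpt-3.5-turbo",
                                               messages=messages)
    print(follow_up.choices[0].message.content)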

Reaction

Not surprisingly, the system has caused trepidation among educators who fear it is yet another way for students to cheat. Plagiarism detectors like Turnitin compare submitted work to a database of published and submitted work, but ChatGPT creates the response from scratch, leaving no database against which these tools can check.

The GPT-2 Output Detector comes to the rescue: it analyzes a passage of text and estimates how likely it is to have been generated by GPT. I entered the text that ChatGPT gave me above and received the analysis below:

[Screenshot: GPT-2 Output Detector result for the text above, showing a red bar with probabilities on either side: “real” 0.02% and “fake” 99.98%, with the note “Prediction based on 335 tokens.”]

The tool listed it as 99.98 percent fake. I then submitted a few paragraphs from an article that I wrote and was given the opposite response: it found my work to be 99.9 percent real. (Hence, you can feel safe that I’m a real person, or can you?)
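
For instructors who want to run the same check themselves, the detector’s web demo is a front end to OpenAI’s publicly released RoBERTa-based classifier for GPT-2 output. Below is a minimal sketch of running that classifier locally; it assumes the Hugging Face transformers package and the public roberta-base-openai-detector checkpoint, neither of which is discussed in this article.

    # Minimal sketch, assuming the Hugging Face "transformers" package and
    # OpenAI's publicly released roberta-base-openai-detector checkpoint,
    # the classifier behind the GPT-2 Output Detector web demo.
    from transformers import pipeline

    detector = pipeline("text-classification",
                        model="roberta-base-openai-detector")

    suspect = ("Genetic engineering, also known as genetic modification, "
               "refers to the process of manipulating an organism's DNA "
               "in order to introduce or alter specific traits.")
    print(detector(suspect))
    # Expected shape of output (scores will vary):
    # [{'label': 'Fake', 'score': 0.99...}]  -- "Fake" meaning machine
    # generated, mirroring the labels in the screenshot above.

A score near “fake,” of course, is evidence rather than proof, so any policy built on the tool should treat it accordingly.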

The future

While the immediate concern with ChatGPT is student cheating, it raises far bigger issues about how education will need to transform itself in the future. This change will require illuminating and rethinking some hidden assumptions. Note, for instance, that the GPT-2 Output Detector uses “real” and “fake” to represent human- and computer-generated text, respectively. But how is computer-generated text any less “real” than human-generated text? When Captain Kirk asked the ship’s computer a question and received an answer, did anyone consider the answer “fake”? “Fake” refers to an imitation of something else, and by that token text can’t be fake in itself; it is only fake in how it is used.

But the language demonstrates a deep feeling that computer-generated text itself is somehow illegitimate. Yet we use calculators to do math, and there is no question as to whether the numbers they spit out are legitimate. Similarly, the chat boxes that open on webpages are almost all computer generated.

It will take time for education to get past this feeling of illegitimacy and instead consider appropriate ways for students to use AI. There are already many ways that students are allowed, even encouraged, to use AI as a tool in their work. They can run Grammarly and spellcheck over their written work to catch errors, and when they do not, we encourage them to. A smart student will use a citation tool like RefWorks to create citations and references; memorizing hundreds of pages of APA rules is like memorizing the URLs of every website in a field rather than learning how to do a Google search.

Incorporating AI into education

Our students will undoubtedly use AI systems like ChatGPT in their future work, and simply forbidding these systems’ use is a Luddite position that undermines students’ preparation for the future. At the same time, the purpose of education is to learn, and the danger of AI is that it replaces needed learning.

Perhaps math is a good model for how education can incorporate AI. Students learn to do addition, subtraction, and the like by hand in elementary school, even though they could be allowed to use calculators instead. We require the manual method because it provides the deeper understanding of math that students will need for higher math in the future.

By contrast, STEM classes in college allow students to use calculators to free up time for the harder math. Similarly, we might allow students to use AI to do research in humanities and social science courses, essentially giving the information-gathering job to machines, so that they can focus on analyzing and evaluating that information.

Some have objected that students cannot verify the quality of the information an AI machine gives them because the machine does not cite its sources. This is a legitimate concern, but education can treat AI machines like Wikipedia. A Wikipedia page is a good starting point for research because it provides a broad overview of a topic, but students are expected to supplement it with other sources. Likewise, students might be allowed to use AI machines to get a broad overview of a topic but be expected to triangulate and blend the results with other sources to ensure accuracy and depth. As the GPT-2 Output Detector seems to be very good at distinguishing AI-produced text from human-produced text (which raises the interesting question of what exactly it is finding to distinguish the two), instructors can use it to enforce this rule.

In fact, ChatGPT calls itself a “research preview” (Stokel-Walker, 2022), which suggests that it considers itself just the first step in research. Beyond this, instructors will probably start weighting analysis and synthesis more than factual accuracy as we approach a Star Trek-computer-like world where getting basic information is trivial.

Finally, instructors in research courses should hold an open class discussion of the use of AI in education. After noting that students pay a lot of money for an education, the instructor can ask what rules would best ensure that students receive that education. I have found that students are quite willing to suggest limitations on themselves when the issue is framed in this light.

Instructors might also use ChatGPT as a launching pad for a conversation about future issues in AI. For instance, many thousands of people die every year in car accidents, almost all due to human error. Self-driving cars are on the horizon and could save those lives, but every once in a while the computer will itself make an error that results in injury or death. Should people be allowed to sue car companies when that happens, even though the computer system saved 100 times as many lives as it lost? Alternatively, imagine a medical device that monitors heart rates and administers a shock when it detects a problem. The system proves far more reliable than human judgment and saves many lives, but it is not perfect. What happens if a doctor overrides it, believing it to be making a mistake, but the doctor turns out to be wrong and kills the patient? Conversely, what if a doctor refrains from overriding it on the grounds that it is more likely to be right than the doctor, and the machine kills the patient? Is the doctor responsible for the patient’s death in either case?

These are the sorts of questions that our society will need to answer, and ChatGPT provides an ideal segue into those discussions. We can’t stop the advance of technology, nor should we want to, and as educators we serve our students best by preparing them for the world they will inhabit.

Reference

Stokel-Walker, C. (2022, December 9). AI bot ChatGPT writes smart essays—Should professors worry? Nature. https://www.nature.com/articles/d41586-022-04397-7
