ChatGPT set the education world ablaze with discussion of its implications. The initial worry about students using it to write their assignments was alleviated by detection software such as GPT-2 Output Detector, and the discussion quickly turned to how instructors can use it to create course content. But the ethical concerns about AI in education have focused almost exclusively on student use. Institutional use of AI also raises ethical issues that we will need to examine as it becomes increasingly prevalent in education.

Duty to disclose

One way that higher education is looking to use AI is to replace instructor-student interactions in online courses. For instance, the description of a session at an upcoming higher education conference states that AI has already been deployed in high-enrollment online courses and that the session will examine how institutions can use it for tasks “such as instructor grading and moderating, while maintaining the perception of instructor-student interaction” (my emphasis). The word “perception” here suggests that the intent is to make it appear to students that they are interacting with a human.

Chatbots are common on websites, often accompanied by the photo of a person, which likely deceives at least some users into thinking that they are interacting with a human. While a visitor to an institution’s website might not object to this deception, since they are often just using the chatbot as a shortcut to find information on the site, is the same true of instructional uses? Online education in particular lends itself to AI interactions because of its reliance on asynchronous communication. Learning management systems that host student-instructor communication could use AI to answer questions on the instructor’s behalf.

Any doubts about AI’s potential to mimic humans should be laid to rest by the remarkable case of an AI system that deceived a person into helping it get past a CAPTCHA (the website verification system that poses a challenge only a human should be able to answer, such as picking out the photos that contain a stoplight). The system lied by claiming to be a person with a vision impairment that prevented it from completing the CAPTCHA, and the person then solved the CAPTCHA on the AI’s behalf, allowing it to access the site.

Given the trajectory of AI, there is no reason to believe that it cannot eventually take the place of an instructor almost entirely. But education does not like to think of itself as being in the business of deception; quite the opposite. It is in the business of enlightenment, and deception in the form of plagiarism is one of the greatest sins in academia. Does academia, then, have a duty to disclose its use of AI to students and prospective students? The law commonly requires a home seller to disclose to potential buyers any facts about the home that might be material to the purchase decision, such as a leaky roof. Similarly, should online degree programs be required to disclose to prospective students that they use AI for instructor-student interactions? A student might enroll in such a program believing they would interact with an actual instructor and might have chosen a different program had they known they would instead be interacting with a computer.

Course content

I was recently involved with a university that contracted with an outside company to produce online course content. When the content arrived, many people were shocked to learn that it had been created almost entirely by AI. The widespread feeling was that the institution had been ripped off, even though nothing in the contract precluded the practice.

Yet AI-developed course content is now a hot topic, with a number of articles emerging that teach instructors how to use AI to develop lessons, assessments, and other materials. If an institution believes itself cheated when sold AI-produced course content, can it justify selling students such content that its own instructors or instructional developers have produced? What is the morally relevant distinction between the two? If students learn that AI created the content in their online course, do they have a right to object just as the institution did?

Confounding the dilemma is that an institution’s marketing department would likely oppose broadcasting to potential students that AI produces much of the teaching content, as doing so would put the school at a competitive disadvantage against institutions marketing their content as human-created. Institutions thus have a strong self-interested incentive not to disclose their use of AI.

At the same time, course developers draw on a variety of outside sources in developing their courses, including open educational resources and YouTube videos. Is using AI to do the same so different? I recently saw a physician’s assistant about a knee issue. She looked up my ailment online in the exam room and showed me the results. While I was inclined to think, “I could have done that,” doctors tell me that this is exactly what they are doing when they excuse themselves for a few minutes during a consult. Perhaps our “ick” response is simply due to the newness of the technology and will diminish over time as AI becomes more ingrained in different aspects of our lives.

A vision for the future

One distinction that might help institutions navigate the ethical issues with AI is between course content and student-instructor interactions. When it comes right down to it, institutions are not selling their course content; students can find nearly all the information they receive in a class in books and articles or online for free. What higher education really sells is access to an instructor who can answer students’ questions and guide them through their confusion about topics. Accordingly, institutions might feel free to use AI to assist instructors in developing course content while exercising caution in replacing student-instructor interactions with AI.

Used this way, AI can even free up instructors to spend more time on student interactions. It is not necessarily true that humans are better at answering questions than AI, but it may be that, at least for now, a human is better able than a computer to diagnose another human’s difficulties in understanding concepts. A human might also be more empathetic, and students might simply feel more comfortable discussing their struggles with a person than with a computer.

This distinction between course content and instructor-student interaction may be only a stopgap until AI develops to match, if not exceed, human teachers’ ability to help students through their difficulties. At that point higher education will need to face the possibility of a fully automated university, which raises further questions. Would students be exposed to a diversity of views in such a setting, and could a university remain at the leading edge of thought if it were driven by AI that learns from past interactions with people or perhaps even with other AI systems? These are the issues higher education must address as AI develops rapidly and increasingly replaces human activity in instruction.