Changing the AI Narrative: Embracing Defiant Optimism

Since January, I have led multiple faculty development sessions on generative AI at my university. Attitudes at these events have ranged from concerned and frustrated to overwhelmed and worried, shading into grim resignation (to be fair, a small number of us were excited and eager). In defense of the less-than-positive reactions, faculty had just started coming out the other side of the pandemic when ChatGPT dropped last November. Even as faculty attend workshops and webinars on the latest practical tips for dealing with AI, we also need to be mindful of the prevailing attitudes toward, and narratives about, generative AI.

In his discussion of AI policymaking, Mark Coeckelbergh (2020, p. 148) notes that the way we define an issue shapes the solutions we propose. Applying this perspective to higher education, I argue that the narratives faculty use to make sense of AI shape the ways they respond to AI in their classes. The narrative that often informs the hand-wringing about how AI will fundamentally change higher education is one of control, mastery, and containment. It includes both immediate concerns, that faculty are losing control of assessment methods and notions of content mastery in the face of AI, and broader anxieties about the need to master and contain AI before it masters and controls us.

In reaction to narratives of AI cynicism, I choose to adopt a perspective of defiant optimism. While I refuse to be pessimistic about AI, I’m also not naive when it comes to the many ethical issues that come with generative AI, particularly large language models (LLMs). To put it bluntly, tech companies created LLMs unethically through multiple systems and economies of extraction and exploitation, including the illicit use of copyrighted material to train their models. We have to acknowledge that LLMs are now part of a long history of scientific and technological advancements that were achieved through unethical means. But that doesn’t mean we should ban LLMs or regulate them to the point of their being nonfunctional. They offer too much potential to increase student learning and expand human creativity. We need to acknowledge the harms caused and commit to more ethical AI policies for the future.

Any narrative rooted in control, mastery, and containment will always be vulnerable to abuse and exploitation; it’s essentially baked into the narrative. We need to change the AI narrative to one based on radically expansive collaboration and inclusivity. Changing the narrative makes it easier to embrace models of AI use that inherently value harm reduction. Moreover, we need a new narrative framework because AI unsettles many of our existing assumptions about intelligence, writing, and creativity. Rather than rush to cram AI into our existing conceptual frameworks, we should explore and create better, more inclusive alternatives.

What does rejecting the control narrative in favor of expansive collaboration and inclusivity look like in higher education?

Don’t police, collaborate

Research increasingly shows that AI detectors are not effective at catching AI-generated work and that English language learners are at increased risk of being falsely accused of using AI (Edwards, 2023). Furthermore, AI detectors reinforce the worst authoritarian impulses of both higher education and AI. Instead of trying to police AI usage, bring students into the conversation about AI in the classroom: What do they think appropriate and ethical AI usage looks like? A communal agreement about what is and isn’t allowed, and about the penalties for breaking that agreement, goes a long way toward fostering student buy-in and engagement. It’s not a top-down authoritative decision but one that listens to, respects, and includes students.

Don’t offload, engage

Encourage students to engage with and critique AI, ideally as a conversation partner. Many students have an inflated sense of what LLMs can do; they buy into the belief that AI can “cheat” for them and that they won’t have to do much (or any) work beyond copying and pasting.

At the start of the fall 2023 semester, I asked my first-year composition students to task an LLM of their choice with writing their first paper, a literacy narrative, for them. They brought the AI-generated essays to class to compare and discuss. Students quickly noted that while the essays looked good at first glance, the writing was superficial and ultimately didn’t sound like their own.

After an opening exercise that adjusts students’ expectations about AI, there is plenty of room to showcase its potential uses. For example, we often encourage students to study together or write together, and an LLM can be that study buddy or writing partner: a conversation partner that is infinitely patient and willing to explain and re-explain a concept until it clicks for a student. This conversational element works particularly well when students give the AI a detailed persona to adopt in its responses. Faculty could either specify that students provide the AI with a particular persona (e.g., a friendly peer tutor who offers constructive feedback) or let students invent their own bespoke AI personae.
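
To make the persona idea concrete, here is one illustrative prompt a student might use; the wording is only a sketch, not a prescribed template: “You are a friendly peer tutor in a first-year writing course. I’m going to paste in a draft paragraph. Before you offer any suggestions, ask me two questions about what I’m trying to say, and keep your feedback constructive and encouraging.” The more specific the persona and the ground rules, the more the exchange tends to feel like working with a study buddy rather than querying a machine.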

While there is value in teaching students how AI can help them optimize formulaic writing, like resumes, cover letters, proposals, and abstracts, we should also encourage them to get creative and weird with how they engage with AI. I get significantly more intriguing and compelling outputs from an LLM when I write with it rather than offload tasks to it. Even asking it to “read with me” or “write with me” produces more interesting output than asking it to “summarize X” or “write Y” and then waiting for a finished product. When I asked Claude to read a scholarly article with me using a persona loosely based on the Golden Girls character Sophia Petrillo, the responses, while amusing, also helped me see the article in a new light.

The real creative potential of LLMs lies in these moments of shared meaning-making between humans and artificial intelligence.

Don’t restore, renovate

When AI “breaks” an assignment, don’t rush to restore it to its pre-AI form. The harsh truth is that writing assignments an LLM could easily hack were probably bad assignments that weren’t necessarily assessing what we thought they were. As the turn toward alternative grading practices, such as ungrading and grading contracts, has made clear, many of our assessment methods haven’t served our students well anyway (Blum, 2020; Stommel, 2023). LLMs give us a unique opportunity to re-examine what we want students to take away from our classes and to consider how AI can support those outcomes and enhance students’ learning rather than detract from it.

Offering an addendum to Audre Lorde’s oft-quoted statement that “the master’s tools will never dismantle the master’s house,” Deborah Kuzawa (2019) observes that “the master’s tools can be used to renovate that house until it is no longer recognizable as the master’s house” (p. 156). While Kuzawa focuses on queer methodologies in the writing classroom, her point can apply more broadly to our present moment and the need to embrace a defiantly optimistic narrative on generative AI. Critical thinking and writing skills will always be core values in a college education, but we can use generative AI to renovate higher education in ways that ultimately will be more engaging and meaningful for a greater number of students.

References

Blum, S. (Ed.). (2020). Ungrading: Why rating students undermines learning (and what to do instead). West Virginia University Press.

Coeckelbergh, M. (2020). AI ethics. MIT Press.

Edwards, B. (2023). Why AI detectors think the US Constitution was written by AI. Ars Technica. https://arstechnica.com/information-technology/2023/07/why-ai-detectors-think-the-us-constitution-was-written-by-ai

Kuzawa, D. (2019). Queer/ing composition, the digital archive of literacy narratives, and ways of knowing. In W. P. Banks, M. B. Cox, & C. Dadas (Eds.), Re/orienting writing studies: Queer methods, queer projects (pp. 150–168). Utah State University Press.

Stommel, J. (2023). Undoing the grade: Why we grade, and how to stop. Hybrid Pedagogy Inc.


Julie McCown, PhD, is an associate professor of English at Southern Utah University. Her current research focuses on generative AI and how to critically and ethically incorporate it into first-year composition.