Fight the Fatigue: Why Integration of AI Beats the Alternative


We’re in the tornado of AI, hands on our cheeks, breathless, as the stories of impending doom swirl around us. The Wicked Witch of the West pedals by, cackling about the end of days. While we may be powerless to decide whether AI gets its hands on the nuclear codes, we can, in higher ed, figure out how to address the issues facing us in our classrooms. The world is asking us yet again to reinvent ourselves and everything we do. A dichotomy seems to exist in how to make these AI-responsive changes, with one side being reactive (identify and respond) and the other proactive (build AI into your teaching—trot it right out front). Read the timeline below, which details just one case of suspected student AI use in my first-year writing class, and you’ll see why I’m coming down firmly on the side of proactive measures.

Day 1: Suspicion

I’m in a pile of op-ed drafts, and they’re mostly quite draft-like, with beginnings of ideas, incomplete citations, and general messiness, as they should be. But this student’s draft stands out in its tidiness; its funky, paywall-protected source use; and its general marked difference from all other writing I’ve seen from him all year. Ah, here it is, I think, ChatGPT in the digital flesh. I slap my laptop lid shut and pace around for a while. What do I do now? Do what you know how to do, I tell myself. Talk to the student. Treat it like any old plagiarism case—dozens of which I’ve dealt with over the years—and talk to him, get him to tell you what he did. The truth will OUT! It always has.

Day 2: Preparing for first interaction

I email the student: Can we chat before class tomorrow? I’d like to talk to you about your draft. There’s something about it that doesn’t sound quite like you. I really want to tell him before I tell him. I also want a leg to stand on if I decide in the moment to be bold and let him know that I am concerned, specifically, about his use of AI, so I write to my colleague:

What is the name of that AI detection site you talked about? He answers: GPTZero. Ah, yes! I paste the student draft into this program, yielding a response like, “This text MAY have SOME portion generated by AI.” Gee, thanks. I now have almost nothing with which to validate my suspicion when I talk to the student. Proof! Where’s the proof? I have none—only my trained eye and sensitive gut, neither of which will mean anything to this student.

Day 3: Interaction with student

Class ends, the room empties out, and the student makes a big point of pulling a chair up to the instructor table at the front of the room. He’s ready. Seated behind this table, I turn my laptop halfway around so we both have eyes on the draft, but before I can say anything, he says, “Is it this word right here (indispensable)? Is that what doesn’t sound like me?” (Um, yes, I’m thinking. Among other things.) “Oh, hmm,” I stammer, fixing my eyes on it in the draft. “Maybe? But it was really the overall style of the writing that just sounded quite different from anything you’d written all year.” He isn’t going to let me get out ahead of him, so he plunges forward: “Well, I do use a program called Querium to help me with things like vocabulary and transitional words.” There it is, I think. The admission! The truth! I can work with this. We have a discussion about how he uses this program, and he insists that at least 80 percent of the draft came from his brain, with only some editing-level help coming in from the bot. I tell him that I believe him, but that he has to be sure to include proper citations when the next draft comes in.

Days 4–5: Waiting for next draft

I’m thinking: He used AI for more than 20 percent of that paper. Plain as the nose on my face. And he knows I know! He will revise appropriately.

He’s (maybe) thinking: She believed me!

Day 6: Next draft comes in . . .

THE SAME. How much impact did our conversation have on this student’s next steps? I ask my internal screening tool. Result: 0 percent.

Days 7–10: The white knuckle

I painstakingly compose (all by myself) an email response to the student’s unchanged draft. I can’t prove the use of AI, and this student knows it. I have to grade the piece As Is, with (only) the parameters of the assignment as a metric. So I go on and on, talking about how an opinion piece should have a significant voice in it and how his writing has NONE, and therefore the grade is such and such. I run my response by my colleague, asking, Is this passive aggressive? (Which I really want it to be, because I WANT the student to know I know.)

Days 11 to infinity: Course correction

I have a stark realization that this is an unsustainable practice and not at all scalable to the masses of students in my first-year writing and other classes, who I’m assured will be using ChatGPT and the like in the very near future if not already. If I drew a pie chart of the energy I devoted to this ONE student and this ONE draft compared to the energy I devoted to feedback for all the other students, the slice would be wildly disproportionate.

That’s not to mention that the dynamic that arose between this student and me after this interaction is not one I’m looking to replicate. He was trying to get away with something, and I was trying to catch him. Words like “trust” and “community” and “confidence” reflect what I’m trying to build with these first-year writers, not a police state where I’m constantly suspicious and endlessly seeking verification of cheating, fraud, and inauthenticity.

So this brings me back to the dichotomy between reactive and proactive practices. As we sprint down the last leg of summer, I am readying myself on the proactive side in the best ways I can conceive of at present, which include a few basics, starting with incorporating an AI policy into my syllabus; Boston University’s Faculty of Computing and Data Sciences (2023) provides one such example. Further, I am planning intentionally for slow, scaffolded, and explicitly directed use of AI in both lower- and higher-stakes writing in my classes, focusing on process AT LEAST as much as product, with the goal of retaining the valuable pulp that is extracted in working the writing process to its fullest.

Will this quasi-embracing of AI mean that I never have to face an overuse of ChatGPT or some other bot? No, but with these forward-facing AI practices in place, I think any conversation I may need to have with a student will stand a much better chance of being productive and based on trust and information rather than on damage control and fear of the unknown.

Reference

Boston University Faculty of Computing & Data Sciences. (2023). Using generative AI in coursework. https://www.bu.edu/cds-faculty/culture-community/conduct/gaia-policy


Jill Giebutowski is an assistant professor of writing at Springfield College, where she also directs the writing program.
