The Inevitably De-Generative AI in Classrooms
By Dr Anannya Dasgupta, Director, Centre for Writing & Pedagogy (Krea-CWP)
There is no way to use Generative Artificial Intelligence (AI) ethically in the classroom, especially in classrooms that aim to teach reading, writing, and critical thinking. I am referring specifically to AI that reads and writes when prompted. How did I arrive at this clarity amid the clutter and noise that says otherwise?

I head the Centre for Writing and Pedagogy at Krea, and I have been teaching for nearly 30 years. Generative AI, in the form of ChatGPT and its peers, has been around for a mere three and a half years, in which time it has acquired a narrative of inevitability and an omnipresence in electronic devices that have anything to do with text. Educators are expected to cave in to the logic of “it’s here, everyone is using it, there is no way around it, so let’s figure out how to use it better.”
In the beginning
ChatGPT was launched in November 2022; within a month, The Atlantic was bemoaning the death of the college essay because students had already started using it. In early spring 2023, ChatGPT homework began appearing in our classrooms too. With no reliable way to detect ChatGPT use (there still isn’t one), faculty understandably panicked. We took stock of the situation at CWP, and in April that year we organised a faculty workshop: “Don’t Panic: A Hitch Hiker’s Guide to Chat GPT.” A Computer Science faculty member joined us, and together we demonstrated the logic of its algorithm, the generic structure of its responses, and its obvious hallucinatory gaffes (ChatGPT still makes up false information) to show that with a little vigilance and inventive assignments we could outsmart its use.
Meanwhile, we brought AI-generated responses and essays to class for students to review, so they could see for themselves that what they were learning to write, and how they were learning to write it, was (and remains) better. These are writing classes with frequent feedback and revision built into the drafting process, which makes it harder for students to persist with AI use. Yet even when students stopped using AI in writing classes, they continued with it in other courses. In response, faculty have returned to handwritten exams and other forms of assessment in which students have no recourse to AI. So far, so good. It looks handled. But not really.
The rough middle
Over the last two years, we have been noticing a disturbing trend, as have our colleagues in other colleges, and not just in India. Students can read sentences but struggle to comprehend their meaning. At CWP, we have a pool of scholarly readings gathered over years of teaching writing; students now arriving in our classrooms struggle to comprehend them. Other disturbing trends include the aggressive marketing of Generative AI as the way to college success, with falling behind as the only alternative; a push to teach prompt-writing so as to generate better AI responses; and, the real heartbreaker, educators embracing all this as inevitable in order to stay relevant. All this while students cannot write a summary of an assigned reading without AI. One thing becomes clear: those who do use AI successfully for writing tasks already know how to read, write, and think critically, and can tell what kind of prompting will make a response better. They come to AI knowing better. But what about students who have not yet learned to read and comprehend a text on their own?
There has been a recent spate of articles about the casual impunity with which students are using AI to gain admission and then to graduate from their courses. This is happening in universities across the board in the US, including the Ivy League. Equally, there are reports of college students unable to read longer, complex texts, including novels. A recent MIT study confirmed the compromised learning of students who rely on AI. We all know that unused muscles atrophy. As we toy with herding a generation of students into mere functional literacy, we might spare a thought for two questions: 1) who benefits from functionally literate populations? and 2) how many of those advocating for AI by pointing to its fait-accompli inevitability learned to read, write, and think with the help of AI?
It is time for educators to draw a hard line. We saw what happened when big pharma began funding medical research in universities. Now that tech companies are directing Generative AI research at the very universities that were punishing students for AI use, we can guess which way this is going. There will be no easy solution to this mess.
In the end, it is all about love
At the recently concluded first-year orientation at Krea, when we asked for a show of hands on using Generative AI to read and write, every hand went up. No surprise there. During the faculty training sessions in which we planned and modified the first-year writing course, we had anticipated exactly this. Our only hope against Generative AI, we realised, is love. If we can pass on to our students our love for reading and writing, and draw them into our joy, they might want it for themselves. Can we teach the enjoyment of reading to a generation raised on ChatGPT summaries? We gave it a shot during our orientation session: “The Mock Turtle’s Tutorial in Reeling, Writhing and Critical Sinking.” Together we read Lewis Carroll, Ocean Vuong, and a Shakespeare sonnet. The students were hooked on unpacking sentences. They wrote puns; they thought about human and butterfly migrations; and they teased out how Shakespeare bet immortality on good writing. They were keen to learn.
This term’s common reading in the first-year writing course is “What Happens After A.I. Destroys College Writing?”, published in The New Yorker less than a month ago. We are going to address AI use in the safe space of our classrooms, giving students the opportunity to figure out their place in the big picture of techno-fascism. Like other forms of substance abuse, Generative AI feels inevitable only if we give in to its seductive short-term ease, before it comes back to bite us in our classes.