
About three years ago, I began to notice that the grammar and spelling of my first-year students were improving significantly. “Oh good,” I said to myself, “they’re finally listening to my advice – always do a spell check before submitting your essay, and if you’re not sure about the correct usage, Grammarly is a handy tool.” Something of a technological dinosaur, I was unaware that a software tool that had started out as a simple corrective grammar checker had metamorphosed into a full-scale AI-powered writing assistant. And I’d heard only vague rumors about large language models, or LLMs, trained on vast datasets, and a new thing called ChatGPT that was rapidly being adopted worldwide and heralded as the first big step toward “generative artificial intelligence.”
My employer, Arizona State University, which prides itself on being consistently ranked “No. 1 in the US for Innovation,” soon set me straight. In January 2024, it became the first higher education institution to actively collaborate with OpenAI, the creators of ChatGPT. We were given access to the latest version, which was not yet available to the general public. And, most importantly, every time we logged in there was a little disclaimer at the bottom of the page: “ChatGPT can make mistakes. OpenAI does not use Arizona State University workspace data to train its models.” Furthermore, senior management quickly decided that there was no point pretending that LLMs did not exist, or that students would not use them as shortcuts, so the best approach was to build into our courses lessons on their capabilities and shortcomings. I had a lot of fun getting students to create AI-generated projects and then critique them in class.
I also provided some tutorials on the pitfalls of using ChatGPT as a search tool. My first attempt at doing this myself was instructive. I was working on a paper about the representation of extractive industries in F Scott Fitzgerald’s fiction (“The Diamond as Big as the Ritz” and the dust in The Great Gatsby) and I came across a reference in the correspondence between Scott and Zelda to the magic fire music in Wagner’s Ring cycle – which George Bernard Shaw had interpreted as an allegory of 19th-century extractive capitalism. Wondering if the Fitzgeralds had heard it on their gramophone, I asked ChatGPT when the first recording was made. Initially mistaking the magic fire music for the Ride of the Valkyries, it told me that a performance by the Leipzig Gewandhaus Orchestra conducted by Richard Strauss in February 1889 had been recorded on wax discs. Wow, I thought. Strauss conducting Wagner! I never knew that – better do a quick check. And of course, despite all that convincing detail, it turned out to be a total hallucination.
That said, LLMs are useful research tools. When I was finishing my last book, a global history of the garden in culture, art and literature, I wanted to learn about the history of the wheelchair. Using Claude AI as a glorified Google, I collected 300 references within seconds, and a reasonable number of them were genuine. But if you do this, you should always check that the results are not hallucinations – a point I still have to stress to my students.
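For readers who want to semi-automate that sanity check, here is a minimal sketch – my own illustration, not a vetted tool or part of any course – that queries the free public Crossref API to see whether an AI-suggested reference corresponds to anything in the scholarly record (the example citation below is invented for demonstration; a hit is no guarantee the AI described the source accurately):

# A minimal sketch: checking whether an AI-suggested citation exists,
# via the public Crossref API. Illustrative only; example citation and
# the match threshold of "any hit at all" are assumptions.
import requests

def crossref_lookup(citation: str, rows: int = 3) -> list[dict]:
    """Return the top Crossref matches for a free-text citation."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

if __name__ == "__main__":
    # A citation an LLM might produce; see if anything like it exists.
    suggested = "Shaw, The Perfect Wagnerite: A Commentary on the Niblung's Ring"
    for item in crossref_lookup(suggested):
        title = (item.get("title") or ["(untitled)"])[0]
        year = item.get("issued", {}).get("date-parts", [[None]])[0][0]
        print(f"{title} ({year}) DOI: {item.get('DOI')}")

Even then, a matching record only tells you the source exists – you still have to read it to confirm it says what the AI claims it says.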
A lesson that one-time university professor turned right-wing cheerleader Matt Goodwin doesn’t seem to have heeded. By relying on ChatGPT for his new book, the Reform Party’s “intellectual guru” – or MattGPT, as he will henceforth be known – has destroyed any credibility he still had after losing the Gorton and Denton by-election. This was just the most egregious example of unsanctioned AI use to come to light in recent days. Publisher Hachette has pulled author Mia Ballard’s horror novel Shy Girl after it was revealed to be 78 percent AI-generated (JG Ballard, the great literary prophet of our tech-saturated age, must be smiling wryly in his grave at the undoing of his namesake). And The Atlantic has published an article about how “artificial intelligence seems to be appearing, undetected, on the opinion pages of major news outlets,” citing an example from the New York Times’s widely read Modern Love column. Meanwhile, in schools and on campuses around the world, faculty are grappling with the question of what to do about student use of AI.
At every level, from school to postgraduate, students are having LLMs do their thinking and writing for them. This semester I decided to face the problem head-on. I included the following rubric with each writing assignment:
Using AI:
Generative artificial intelligence via large language models (LLMs) such as ChatGPT and Claude AI is here to stay. It provides valuable tools for both research and writing, BUT its research is often unreliable and riddled with “hallucinations”, which means that everything it tells you must be checked against primary sources; and its writing, while useful as an editing tool, should never replace your own words – especially in a Humanities course, where writing well and crafting a critical argument are precisely the skills you are here to develop. Therefore, FOR EACH OF THE FOUR MAIN ASSIGNMENTS IN THIS COURSE, please include one of the following statements at the end of your assignment. Failure to include such a statement will be penalized in your grade. There are five options:
1. I did not use AI at all in the preparation of this assignment.
2. I used AI for my research, but not for my writing.
3. I drafted my work, then edited it with the help of AI.
4. AI wrote the draft of my paper, then I edited it myself.
5. My work was done entirely by AI.
I’ll be very happy with 1 and 2 (though in the case of 2, make sure to check for hallucinations), quite happy with 3, not so happy with 4, and furious with 5.
The results, I’m happy to report, were very encouraging. Because the students were told the rules in advance, they took the hint to rely on their own intelligence rather than the artificial kind. Statements of AI use were almost always honest, and students who avoided AI were proud of their work. There were also thoughtful requests for clarification: “Does using Grammarly count as AI editing your words?” To which my response was, “Good question – when Grammarly is just a spell and grammar checker, to me it doesn’t qualify as using AI. But if you’re using the more advanced version that does significant rewriting, you should tick the box saying you used AI as an editor.”
Administrators and some faculty – myself included – take the view that in a future where so many graduate jobs will be taken over by, or dependent on, AI, the pragmatic approach is to work with our students on using the new technology smartly and honestly – just as a previous generation had to learn how to use word processors, then Google searches and Matlab applications. But I sense a revolt among the students themselves. One of my sons, a freshman at Durham, says: “I’m shocked you allow it to be used at all, Dad, given the cost to the planet of all those data centers. I refuse to touch it because I went to college to think for myself. And my friends are the same.” A stirring editorial in the University of Pennsylvania’s student newspaper makes the opposing case with great eloquence:
“In 2024, Penn made history by becoming the first Ivy League school to launch a degree in artificial intelligence. In dropping its systems engineering degree, the University justified the replacement as a program that would ‘fit the needs of the AI-powered 21st century.’ Since then, AI has become intertwined with getting a Penn education in entirely unprecedented – and potentially dangerous – ways. Our AI course offerings have exploded: Penn now offers 10 undergraduate programs, 21 graduate programs, and eight doctoral programs in the field. But as this model accelerates, Penn’s commitment to AI innovation looks less like an enhancement to our learning and more like a detriment to our critical thinking skills. In its relentless support of AI, the University has in effect endorsed shortcuts and the contracting of academic thinking, undermining the very culture of open expression it purports to promote…
“The irony is that while Penn pours endless money and energy into advancing AI in its quest to move forward, the University is only hastening its own demise. AI cannot coexist with education – it can only degrade it. As technology advances and workers are replaced by machines, schools are some of the only places we have left to explore and wrestle with human thought; the few universities that drive it out leave us nowhere to engage in real scholarship.”
The court case in California, in which Meta and YouTube were found liable for causing mental harm through their addictive algorithms, could prove to be the pivot that strengthens the backlash against social media. If some of my students’ comments are anything to go by, I suspect something similar could happen with AI. My favorite response was: “I’m an artist and I hate AI. Even if I write terribly, I’d rather it be my writing. Every source I found was taken from JSTOR.”