
A friend of mine is a professor on the front lines of the artificial intelligence (AI) cheating revolution, both in his classroom and as part of the college committee that judges academic misconduct. He has discovered that a collective approach to detecting AI cheating may be more effective than an individual one. But judging malfeasance collectively undermines the principle that we judge wrongdoing individually. This approach could be dangerous, but we may have no better options.
My friend, like many other teachers and professors, is learning to recognize AI style when he sees it. AIs write and structure their essays in characteristic ways, and teachers, above all those in the humanities, are pretty good at recognizing style. The style of AI, incidentally, is still that of someone writing repetitively and simplistically about a book they haven’t actually read. Assuming little or no grade inflation, it can as yet produce no better than a “C” essay.
Yet to recognize a style is not necessarily to be able to prove academic misconduct. Students can use AI to produce a first draft and then rewrite it, crudely with online thesauruses or sometimes with greater sophistication. If they use AI as a crutch but take the time to camouflage their dependence, it can be difficult to establish beyond a reasonable doubt that a student cheated, even when a professor knows it in his bones.
Or rather, it can be difficult to establish that one student cheated. But it becomes much clearer when you assign one essay prompt—and ten or fifteen essays possess a startling resemblance in structure and argument. The chance that so many different students would independently come up with the same precise way to approach an essay subject is extraordinarily small. You can assemble a collective proof of cheating by examining student essays en masse.
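The article names no particular tool, but a minimal sketch of what examining essays en masse could look like is a pairwise-similarity pass over every submission to the same prompt. The code below is purely illustrative: the flag_similar_pairs helper, the TF-IDF cosine-similarity measure, and the 0.6 cutoff are assumptions chosen for the sketch, and a real comparison of structure and argument would require something more sophisticated.

```python
# Hypothetical sketch: flag pairs of suspiciously similar essays submitted
# to the same prompt. TF-IDF cosine similarity is a deliberately crude
# stand-in for comparing structure and argument; the 0.6 threshold is an
# arbitrary illustration, not a validated cutoff.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def flag_similar_pairs(essays: dict[str, str], threshold: float = 0.6):
    """Return (student_a, student_b, score) for pairs above the threshold."""
    ids = list(essays)
    vectors = TfidfVectorizer(stop_words="english").fit_transform(
        [essays[i] for i in ids]
    )
    scores = cosine_similarity(vectors)
    return [
        (ids[a], ids[b], round(float(scores[a, b]), 2))
        for a, b in combinations(range(len(ids)), 2)
        if scores[a, b] >= threshold
    ]


# Usage: flag_similar_pairs({"student_1": essay_text_1, "student_2": essay_text_2})
```

Even a crude pass like this would surface a cluster of ten or fifteen near-identical essays.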
At least, this kind of collective proof works for now. Perhaps next year, somewhat smarter AIs can be instructed to vary their responses to essay prompts, although it would take a rather formidable AI to ensure its answers are not the same for different people in the same class. But if this method of detection does continue to work, it raises interesting corollaries for how to teach.
To begin with, it reverses the essay-prompt dynamic. Until now, teachers more often varied the prompts they gave their students to make cheating harder. A collective method of proof means that teachers would do better to assign the same prompt to everyone, since only identical prompts yield the resemblances that constitute collective proof. Of course, this makes it more likely that students will simply copy one another’s work, but if teachers can detect cheating by AI, they can detect the more old-fashioned kind as well.
More profoundly, collective proof of wrongdoing undermines the traditional sense of individual culpability. Disparate impact theory, for example, seeks to punish disparate outcomes across identity groups regardless of intent, and judges have slapped down that formal abandonment of individual responsibility and intent in favor of judgment by collective (statistical) disparities. This collective procedure to detect AI may work, but it cuts against the foundations of Western law.
Even if we accept collective proof of AI cheating, that leaves open the basic questions that bear upon collective (statistical) proof in law. What precise tool would one use to establish this collective proof of cheating? What would be its criteria? Could you guarantee the reliability of its operations? Should a college or university, in justice, use such collective proofs if they fail even once?
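Before turning to policy, one hedged sketch of the reliability question: whatever similarity measure is used could be calibrated against a baseline of essays known to have been written independently, say, pre-AI cohorts answering the same prompt, so that the rate of false flags on innocent work is measured rather than guessed at. The calibrated_threshold helper and the 99.9th-percentile cutoff below are illustrative assumptions, not established standards.

```python
# Hypothetical sketch: choose the similarity cutoff from a known-clean
# baseline (pairwise scores from cohorts that could not have used AI), so
# that almost no independently written pair would ever be flagged.
# The 99.9th percentile is an illustrative assumption, not a standard.
import numpy as np


def calibrated_threshold(baseline_pair_scores: list[float],
                         percentile: float = 99.9) -> float:
    """Return a cutoff that only a tiny fraction of known-independent pairs exceed."""
    return float(np.percentile(baseline_pair_scores, percentile))
```

Calibration of that sort addresses only the technical questions; the institutional ones remain.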
A preliminary answer might be: “Colleges should determine in advance what collective proofs and measures they will use. They will provide set means—due process—by which students might challenge such collective proofs of academic misconduct. Students admitted to a college will sign an explicit affidavit that they accept that such collective proofs may be used to judge whether they have committed academic misconduct.”
That can’t be a final answer. We might judge that colleges effectively coerce students if and when they require them to sign such affidavits. We might also suspect that any collective system can be gamed; students dead-set on cheating are clever in formulating workarounds.
But we must try some means to address AI-assisted cheating. AI poses an unprecedented challenge to academic integrity. We should try any means that seem promising until we come up with something that works.
Follow David Randall on X.
“Image chatGPT libre de droit” by Marketcomlabo on Wikimedia Commons
The idea of collective proof of cheating strikes me as horrifying. Collective guilt, too?
This article is very reactive toward AI. I get it that academics are flummoxed by AI. I don’t claim to have complete answers.
I do think students will be wise to learn to use and live with AI, because they are going to be expected to be skilled in this when they graduate.
I’ve also come to the conclusion that students who learn to use AI and then transcend it will be the ones who survive and flourish.
The #1 thing employers say they want in polls is people who are “critical thinkers.” You know, the same polls that are used to claim that higher ed is worthless. Ask your students whether the AI’s output seems like the work of a critical thinker, and then teach them to whip the AI tool into shape by making it give them what they need, rather than letting it become the master.
And people who are experts in using AI tell me that staying on top of the creative process is the key to keeping ahead.
So, the kiddos need to learn how to work with AI. And the faculty need to get up to speed on AI, and then teach the kiddos.
Expecting the students not to cheat is a lost cause. Using a revolutionary tool is not going to be cheating in the real world. Got to get used to it!