What ChatGPT Is Doing to Student Writing

Today’s students can write a perfect sentence that says absolutely nothing.

I’ve been editing college application essays for about 10 years now. Every year, I encounter a broad array of writing abilities, ranging from high school seniors who submit fifth-grade-level essays to seventeen-year-olds who write better than their teachers. The fact is that “smart” kids have always been the minority, and despite never-ending claims that our students are getting dumber every year, I have nonetheless caught glimpses of the next generation’s most brilliant minds in every graduating high school class. These are the students who, whether matriculating into Ivy League schools or more modest universities, will go on to shape our society through medical discoveries, technological advancements, and bold ideas. They may approach problems in unexpected ways, but they have never been significantly duller than their older counterparts.

Not until this past year, however.

The graduating class of 2026 is the first cohort of students who have gone through high school with ready access to artificial intelligence (AI); as a result, they have never had to write an entire essay from scratch or map out a cohesive argument on their own. And while the return of the Blue Book exam structure has somewhat restored a baseline of individual accountability, it is now virtually impossible, with Large Language Models (LLMs) such as ChatGPT and Claude at students’ fingertips, to simulate the experience of having to compose original argumentative essays from the ground up. The result is a sharp decline not only in student writing skills but also in the general capacity for critical thinking, as manifested in the cohesive formulation of written ideas.

But the most curious component of the AI phenomenon is that the approximate distribution of writing ability itself has not changed much over the past several years—there are still the same number of bad writers, mediocre writers, and good writers in every graduating senior class—at least when it comes to command of grammar and syntax. What’s changed, instead, is the prevalence of students who possess a high degree of technical writing fluency yet a low level of intellectual competence, resulting in a greater number of students who can produce perfectly structured sentences that say absolutely nothing.

How is that possible?

It’s simple: The same number of students with a natural aptitude for writing will still learn how to write, but they will no longer learn how to write well. Where previous generations learned to write from books, newspaper articles, and other written materials, the latest generation of students will be most influenced by their new primary source of information: LLMs.

In other words, because students now use ChatGPT and other AI tools to outline essays, skim readings, and solve homework problems—to perform nearly every assigned task—the majority of the writing they encounter will be AI-generated.

The problem is not just quantity, but quality: ChatGPT and other LLMs produce language that often says very little.

Here is an example of a paragraph it generated when I asked it to predict the next section of my essay:

What this reveals, more than anything, is that we have mistaken fluency for thought. A student who can produce a clean, grammatically sound paragraph—complete with varied sentence structure and the occasional well-placed em dash—now gives the impression of intelligence without having engaged in the difficult, often uncomfortable labor of actually forming an idea. But writing, in its truest sense, has never been about polish; it has been about resistance. It is the act of pushing against one’s own vagueness, of confronting half-formed intuitions and forcing them into clarity.

Read the first two sentences. What can you deduce from ChatGPT’s argument? We learn that a) students have mistaken “fluency” for “thought” and that b) students can now write clean sentences without having gone through the “labor of actually forming an idea.” There is truth to both claims, but the first claim is too broad to communicate anything substantial. The second claim simply repeats an idea I’ve already established without deepening it in any meaningful way. ChatGPT is a pro at regurgitating surface-level ideas without actually saying anything of substance.

Its next two claims, however, are the most egregious offenders. ChatGPT goes on to tell us that writing is not about “polish” but “resistance.” Garbage political bias aside, it is bad enough that this statement by itself means absolutely nothing—what follows is somehow even worse: an explanation that wastes an entire sentence on buzzwords and does little to elucidate the meaning of this so-called “writing as resistance.”

All of this is to say that ChatGPT likes to spew nonsense. 

So what happens when students read this bad writing, and only this bad writing, daily?

For one, their own writing begins to resemble ChatGPT’s circumlocutory prose. This year, for instance, I’ve received an overwhelming number of student essays that feature the formulation “It’s not just this, it’s that”—one of ChatGPT’s signature tics, which appears in the sample paragraph above. While many of these students admit to using AI in their writing, some vehemently insist that their writing is their own—even if their essays sound almost wholly AI-generated.

It might be tempting to assume that these students are simply lying through their teeth. But what is most remarkable is that when asked to produce their own writing on the spot, many of these students will recreate ChatGPT-sounding sentences without resorting to their AI sidekicks.

What this means is that students are beginning to write exactly like AI.

You are what you read, after all, and AI writing is the only writing that they have ever known.

As a result, an increasing number of students sound like one another, and more and more student essays become indistinguishable from robot prose.

But is this the end of critical thinking in our society?

Not necessarily. After all, the rise of the electronic calculator in the 1970s convinced an entire generation of pedagogues that students would grow dumber, but the result was simply a shift in intellectual priorities. While the general public is demonstrably worse at mental math today than it was 50 years ago, the handful of individuals who can multiply large numbers in their heads are now infinitely more valuable in certain fields of our society. Whenever a technology automates a basic skill, therefore, the small minority who retain that skill gain a disproportionate advantage.

I predict that we will soon see the same phenomenon in writing.

After all, few students could really write well before the advent of ChatGPT. With that number now dwindling further, writers are about to become a valuable societal commodity.

As Peter Thiel remarked in a recent interview, the future looks bright for the “word people”: good writers may very well define our future.

Follow Liza Libes on X.

  1. I’m not about to defend ChatGPT, but I need to push back on your interpretation of the latter sentences of its predicted paragraph. I don’t think it was using “resistance” in a political sense, at all, but rather as an analogy to weight-lifting – the claim is that writing requires that you *strain* against something that is holding you back, which as a “word person” feels very correct to me. And what is one straining against, in order to write well? I’d say that “one’s own vagueness [and] half-formed intuitions” is a reasonable description.

    Clearly, ChatGPT could have worded those sentences better to make their meaning clearer, and personally I would have leaned more into the weight-resistance metaphor and returned to it in the last sentence – because that’s my own writing style. But I don’t think it’s correct to dismiss ChatGPT’s output here as “[meaning] absolutely nothing.”

    BTW I realize that my own opening paragraph used a “not this but that” structure twice in a row, but I swear it came out of my own stochastic parrot brain.

  2. If you write perfect sentences that say absolutely nothing, no one will ever be upset with anything you may have said.

    You thus avoid the risk of getting in trouble. You avoid the risk of offending your teachers, behavior which is rewarded with good grades, and it’s the students with good grades who have a chance of admission to the elite institutions, where students who avoid offending anyone are further rewarded.

    At least half of a high school GPA is how popular the student was with the teachers; grading is that subjective. I say this as a certified high school teacher who has not only taught in a high school but heard the conversations in the teachers’ room.

    In other words, we reside in Dante’s Vestibule of Hell, where indecision and neutrality are rewarded and success comes to those who manage to never, ever take a stand for anything, ever.

    We have far bigger problems than ChatGPT…

    I’ve never been to Columbia, nor do I have any particular desire to ever go there, even before the institution’s quite shameful pandering to Team Hamas. But what struck me were the television interviews of the neutral students, those who were neither supporters of Israel nor of Hamas.

    What struck me was how very careful they were to say that they understood the concerns and grievances of each side without taking a side. I saw similar things with the Millennials: they knew the approved viewpoints (e.g. guns bad), but they had absolutely no idea why they believed that, only that they wouldn’t get in trouble if they said it.

    I thus come back to where I started, you won’t get in trouble for saying something if you never really say anything. It’s a feature, not a bug…
