22 Million Student Essays Show Signs of AI Generation, and Professors Aren’t Helping Curb the Trend

When ChatGPT was first released during my freshman year at Emory University, I thought my peers would avoid it for fear of being accused of plagiarism, especially given my school’s strict policy against it. I was wrong. Faced with the challenge of balancing stressful courses, demanding extracurriculars, and, for some, part-time jobs, “Just use ChatGPT!” became a common refrain around campus.

I distinctly remember peer-reviewing a classmate’s essay last year. The awkward vocabulary and disjointed paragraphs immediately revealed that it wasn’t his work. Though I kept my suspicions to myself, I offered feedback as best I could. The professor eventually uncovered the truth and confronted the student. When I asked him about the exchange afterward, he candidly admitted that he planned to continue using artificial intelligence (AI) for his assignments.

While AI can be useful for explaining difficult concepts and studying for exams, many college students use it unethically: to write papers, complete assignments, or cheat on exams. As of late March 2024, Turnitin reported that out of more than 200 million papers reviewed, over 22 million showed signs of being at least 20 percent AI-generated, while more than 6 million appeared to be 80 percent or more AI-written, according to Campus Technology. Even though colleges and universities may classify AI-generated work as violating their student conduct policies, there is no foolproof way to prevent its use. The constant advancement of AI technology makes detection more challenging, and in some cases, AI detection tools have misidentified a student’s original work as AI-generated.

Simply put, AI is unpredictable. As popular platforms like ChatGPT continue to evolve, more students pass off AI-generated work as their own, further normalizing academic dishonesty. Minding the Campus contributor Liza Libes documented how students’ dependence on AI is already evident in tutoring sessions, even in front of their own instructors:

AI tools have virtually eliminated the necessity for students to learn anything about grammar, sentence structure, and the eloquent expression of ideas. Every single one of my students has Grammarly preloaded on their browsers, a tool that cleans up spelling and grammar from tasks as menial as writing emails to those as monumental as drafting the college essay. Students use ChatGPT to obtain summaries of longer texts for English classes and tools such as Studocu to generate AI notes on any topic imaginable. Years ago, I could ask a student to write a paragraph in front of me during a meeting, and within moments, we would have a group of sentences in front of us. Now, every student asks for time to ‘think about it for homework’—most likely because they do not want me to know they will not be the ones doing the thinking.

The reliance on AI, however, isn’t solely a student problem. If we’re genuinely committed to moving away from AI and fostering more authentic learning, turning to faculty for a solution would be a mistake.

Many professors now use AI to craft lesson plans, generate lectures, and grade assignments. This has created a strange double standard across campuses: When students use AI, it’s considered a violation of policy, yet when faculty do, it’s viewed as a time-saving efficiency.

As AI becomes more embedded in both student and faculty workflows, the distinction between genuine learning and mere task completion grows increasingly blurred. If professors, who are entrusted with shaping the next generation, rely on AI for lesson planning, grading, and feedback, how can we expect students to develop critical thinking and independent skills? When both students and educators turn to AI to sidestep effort and meaningful engagement, the very essence of education, learning for its own sake, begins to erode. With the normalization of AI-driven shortcuts, it’s not just the value of a degree that’s at risk; it’s the integrity of the educational experience itself.


Image: “ChatGPT Artificial Intelligence” by Focal Foto on Flickr

Author

  • Alyza is a junior at Emory University in Atlanta, GA, studying Economics and Spanish. Having witnessed the effects of “woke” culture and political correctness on campus, she is deeply concerned about the extent to which students' free speech remains unprotected. Previously an intern for Speech First, Alyza hopes to leverage her experience to raise awareness about institutional censorship and the indoctrination of young adults in higher education as a writing intern for Minding The Campus (MTC). Connect with her on LinkedIn at www.linkedin.com/in/alyza-harris-67b865202.


