Editor’s Note: The following is an excerpt from an article originally published by the National Association of Scholars on March 31, 2026. It is crossposted here with permission.
Last Wednesday, First Lady Melania Trump appeared at a White House summit on education technology alongside a humanoid robot, powered by artificial intelligence (AI), named Figure 3. Figure, the company that makes the robot, states that its goal is “bringing a general purpose humanoid to life.” It has already done so. Figure 3 was designed to be an everyday general-purpose robot that helps around the house, serves food, does chores, and more. Now, according to our First Lady, we should imagine a future where our children are educated by robot philosophers like Figure 3. At the summit, both Figure 3 and Melania “extolled the virtues of further integrating robots into the educational and social lives of children,” after which the robot addressed the crowd in various languages before teetering away. The First Lady went on to say,
Imagine a humanoid educator named ‘Plato.’ Access to the classical studies is now instantaneous—literature, science, art, philosophy, mathematics, and history. Humanity’s entire corpus of information is available in the comfort of your home.
Plato will provide a personalized experience, adaptive to the needs of each student. Plato is always patient, and always available. Predictably, our children will develop deeper critical thinking and independent reasoning abilities. The AI-powered Plato will boost analytical skills and problem solving and adapt in real time to a student’s pace, prior knowledge, and even emotional state.
The Terminator-esque nature of the First Lady’s speech with a robot looming eerily over her shoulder sent a chill down my spine, and to say it was not well-received by others is an understatement.
With the advent of an apparently exciting new era in education—sarcasm heavily implied—now, more than ever, is the time to explore why AI cannot and should not replace real-life educators and classroom instruction, and why embracing it harms academia and society as a whole.
First, AI is no substitute for person-to-person socialization, whether peer-to-peer or educator-to-student. Human connection and relationships are the very fabric of society, and it takes skill to navigate such connections. Jared Gould and Lilla Nóra Kiss highlight an alarming trend in their latest article at Minding the Campus: AI is currently being used by one in five high schoolers for emotional and even romantic engagement. While not created to fill the demand for socialization and companionship—a demand which has skyrocketed since the COVID-19 pandemic—customizable AI companions are growing in popularity. Gould and Kiss say this is bad news because,
Normalizing AI companionship as a substitute for human relationships risks a slow erosion of the social fabric itself. Relationships are the foundation of families, communities, and civic life—and when those bonds weaken, the consequences are tangible for both the individual and society.
Every hour spent in frictionless AI companionship, for instance, is an hour not spent building the skills real relationships demand. Over time, this tradeoff compounds, working against the very grain of human development. Scaled across a generation, the consequences may ultimately show up on an economic level (i.e., a society producing less and reproducing less).
K-12 schools, as well as colleges and universities, are uniquely positioned to counteract this trend. Such institutions “bring together large numbers of young people with shared interests and life stages”—an environment perfectly suited to fostering social formation under the guidance of educators and professors through instruction and mentorship. That is, provided they do not choose to exacerbate the problem by introducing still more AI into the classroom.
Second, to build on the first point, the way humanity learns, teaches, and operates in society is vastly different from the way AI does, because AI lacks agency at its core. Human knowledge “arises from the embodied and reflected encounters of perception, memory, and value,” Joe Nalven aptly states in a Minding the Campus article. To know something, we must integrate it into a coherent worldview in which facts are bound to meaning and action. Compare this to AI’s “knowledge,” which is “a disembodied aggregation of patterns rather than presence, of probability rather than conviction.” Currently, AI’s ability to reason is limited by the information humans teach it: it is informed by statistical patterns and reasons from those same patterns. It is this difference that should give anyone considering an AI educator pause. Critical thinking is a human experience; genuine thought is more than semantics and grammatical fluency. A City Journal article succinctly explains AI’s pitfalls: “the technology still makes frequent mistakes, carries its own ideological baggage, and fails to converge on consistent results when different models tackle the same question—unless heavily steered by the very humans whose foibles we hope to escape.”
Unless we decide to grant AI agency to somewhat make up for the shortfalls in “thought” it currently experiences—a very bad idea because “a coherent system has interests, and it can resist correction”—a human educator will always possess a coherence and unity of thought to impart to students that AI does not.
Third, classroom technology use is associated with poor learning outcomes. In 2024 alone, American education spent a staggering $30 billion on education technology, yet the amount invested does not appear to correlate with increased student success. The Economist points to the rise of in-class technology as a possible reason for the alarming decline in reading and other subjects: scores on 21 nationwide benchmark exams rose from 1994 until roughly 2012-15, but as classroom screen use became widespread in that period, scores began to sink and have continued falling to this day. Students who use little to no classroom technology typically score highest. Technology exacerbates distractibility, weakening students’ ability to form meaningful connections both with school subjects and with one another. Technology also emphasizes gamification at education’s expense: “although students may improve through repetition ‘within the game’, they struggle to transfer knowledge to other contexts such as standardized tests.”
But surely an AI educator trained on repetition and patterns could teach students to think critically and reason, right?
All of this is to say: AI is nevertheless here to stay. So what uses can it have in academia?
As a tongue-in-cheek idea, instead of replacing educators and professors with AI robots, we ought to give administrator roles to robots. Administrators have been a plague upon K-12 education and our universities for many years. Even accounting for a robot’s upfront cost, it would certainly be cheaper in the long run than continuing to fund administrative salaries, salaries that add to already bloated operating costs and, in turn, drive up college tuition rates.
Imagining a day in class with “Plato” or any other AI-powered humanoid instructor once seemed like something out of a movie. Now it could become reality if we do not continue to fight for academia to right itself. If we want a future where our humanity remains intact, academia must be a bulwark against the trends AI is already setting in motion; otherwise, it seems the only thing movies like I, Robot got wrong was the timing.