I’ve been writing recently about the many ways teachers are being encouraged to use AI in their work. The torrent of messages is such that I’m now on part three of a series about how educators might resist being swept away by them.
In Part I, I examined claims that AI tools are time-savers and the ad hominem that educators who resist using them are fearful stick-in-the-muds. Part II covered the plea that we must teach students how to use AI to “prepare them for the future.”
Here are more common and problematic messages:
“AI can personalize learning!”
Proponents of AI in education tout its ability to “personalize” learning, a familiar edtech marketing claim. They usually mean what educators call “differentiation”: structuring learning so students in a heterogeneous group can be reached where they are. It’s what programs such as Duolingo have done via algorithm.
Sal Khan, founder of the nonprofit Khan Academy, is one of these proponents. Khan is pushing his new AI-infused product by comparing it to a personal tutor:
Instead of simply providing answers to their questions, Khan says, new AI bots like Khanmigo are trained to serve as “thoughtful” mentors, prodding students with questions, giving them encouragement, and delivering feedback on their mistakes as they work to develop their own understanding.
Khanmigo improves in one way on “previous teaching machines” that were supposed to revolutionize education, writer John Warner points out, and that “is ChatGPT’s ability to generate responsive syntax to student inputs.” He notes that “believers in the power of generative AI will argue that these are sufficient to lift [it] above past attempts.”
But like older “personalized” software, Khanmigo doesn’t personalize—or do much of what the language in the block quote above suggests. Machines can certainly “prod”—for generations, teakettles have been whistling and microwaves beeping at us to act. The word “feedback,” however, is revealing. Like so much corporate jargon, it’s become common terminology in schools. With its origins in tech, it implies a machine loop rather than communication.
Other language is more off-putting, especially “thoughtful” and “mentor,” which imply the sentience and sentiment AI lacks. Frankly, it’s gross to suggest a software program can provide children what a human tutor or teacher can. A machine cannot be a mentor. A machine cannot be thoughtful.
Personalization requires a person.
And children deserve personal care. In The Last Human Job: The Work of Connecting in a Disconnected World, University of Virginia sociologist Allison Pugh reveals how the caring professions—from chaplaincy to teaching to medicine and more—are being warped and degraded by the imposition of tech, including AI.
When the “connective labor” of these professions is mediated by tech or contracted out to it, Pugh shows, it is depersonalized. She defines connective labor as:
the forging of an emotional understanding with another person to create valuable outcomes. While there may be many paths to that understanding, they all seem to require some form of empathic listening…and witnessing, in which that vision is reflected back to the other.
This is at the beating heart of the teacher-student relationship, which research has repeatedly shown to be among the most important factors in student engagement and academic success. The “encouragement” provided by a bouncing avatar or a digital spray of fireworks makes a mockery of that relationship and its complex effects on learning.
The best teachers I’ve known focus intensely on relationships with colleagues, parents, and students. They would likely cringe if told to “collaborate” with AI since their superpower is in collaborating with humans. Children need our human attention—especially in this age of social isolation and political trauma.
The push for efficiency in the caring professions, Pugh’s book illustrates, is motivated by profit-seeking in the private sector and austerity in the public. And efficiency is precisely what AI-powered “personalized” ed software promises.
John Warner states, “Real-time feedback [which Khanmigo says it provides] is a tool of efficiency, but in what world is learning necessarily efficient?” He goes on:
In fact, when it comes to teaching writing—my area of focus when I taught—real-time feedback would do significant harm. Writing is, by necessity, a slow process of thought and consideration over a piece’s communicative purpose within a rhetorical situation.
In his new book More Than Words: How to Think About Writing in the Age of AI, Warner says these kinds of programs are better described as “depersonalized learning.” They sideline student, teacher, and the student-teacher relationship.
AI tools aren’t meant to replace teachers, boosters will counter, only to help them! And yet, from the other side of their mouths, many will say, as Khan does, that more teachers and tutors would be best, but we simply can’t afford them. Which, of course, we can. The federal government spent just $119 billion on K-12 schools in 2022, and Musk, Trump & Co. are slashing that number by the day. Nationwide spending on K-12 ed tech, meanwhile, is expected to grow to about $170 billion a year by 2030, powered by “personalized” and online learning. These are choices.
When we don’t have sufficient tutors or smaller class sizes for all, that feeds arguments, Pugh says, that AI automation is “better than nothing.” The Last Human Job reveals the likely outcome of this kind of thinking given our systemic inequities: the haves receiving mental and physical healthcare, education, and other services from expert humans and the have-nots getting stuck with inadequate AI proxies.
AI, then, can be worse than nothing. “The aggregate of [our] small decisions,” Pugh argues, could lead to greater inequity, disconnection, and dehumanization.
“AI can democratize education!”
Some boosters insist AI isn’t just better than nothing: It could be inspiringly transformative. As one conservative think-tanker exclaims, “So if the internet democratized access to information, the analogy essentially is AI is democratizing access to expertise.”
But in 2025, as U.S. democracy is being warped at speed, these democratization claims should ring hollow. Just as the internet provided widespread access to disinformation, chatbots can supplant the habits democracy requires: questioning, reflecting, empathizing, researching, and interrogating sources for validity and accuracy.
As Dan McQuillan argues in Resisting AI: An Anti-Fascist Approach to Artificial Intelligence, AI is far more likely to “amplify” the logic and effects of austerity and facilitate authoritarianism.
We don’t need to speculate about how the latter works. Global security expert Marie Lamensch summarizes:
Indeed, while surveillance, propaganda and disinformation have always been part of an autocrat’s playbook, several technologies make this repression and control much more pervasive, efficient and subtle….Digital authoritarianism takes many forms; from online harassment, to disinformation, to internet shutdowns, cyberattacks and targeted surveillance using social media, artificial intelligence (AI) and facial recognition software…The fact is that the global democratic decline of the past 15 years has coincided with the rise of new technologies for information-gathering, communication and surveillance.
How this can play out in classrooms is illuminated by examining two new AI classroom tools and the threats they present.
Teacher Peter Greene recently analyzed a puff piece on an ed professor who “is working on an AI tool to give feedback to math teachers.” There’s that word again: feedback. As with students, teachers don’t have enough mentors, but instead of humans, we get AI.
And with it comes an astonishing level of surveillance:
The AI tool uses cameras and audio recordings to report on whether the teacher looked at or walked through each section of the classroom, how often they used group work, and many other techniques. Even the words the teacher and students use are tracked.
The last sentence should chill you to the bone in a nation where the ruling party is policing language: outlawing specific words and topics in classrooms, banning them in workplaces, and firing workers based on computer searches that turn them up, with questions maybe asked later. (See: Enola Gay.) Where an unelected, unduly appointed, interest-conflicted, Nazi-saluting trillionaire is burrowing into government systems and extracting data.
Teachers should refuse this tool and others like it, which could be used to target them and their vulnerable students, and to drive educators out of their jobs or out of the profession altogether.
Earlier this year, University of North Carolina law professor—and expert on the WWII Japanese American internment—Eric Muller shared on Bluesky his experience using an AI tool that allows children to converse with an AI phantasm called “Anne Frank.” I beg you to read the thread to see Muller asking a series of questions about the experiences and feelings of “Anne.” As he posted, he “had a helluva time trying to get her to say a bad word about Nazis.”
The bot relentlessly redirected Muller away from any ill will “Anne” might have harbored toward the Nazis responsible for and complicit in her terrible fate, returning again and again to themes of hope and resilience.
I was struck by how well the bot’s responses and follow-up questions for students are aligned with the right-wing education gag orders passed in so many states in recent years, those “divisive concepts” and “CRT” laws meant to whitewash history and ensure students feel no “discomfort” while learning it. (See Tom Mullaney’s writing for the ethics of having students use these kinds of chatbots.)
With tech oligarchs beyond Musk, many with ties to Trump, embracing far-right ideology and conspiracy theories, we should probably expect more AI tools for schools like this horrifying one: ahistorical, emotionally manipulative, and meant to inculcate passivity amid hate, violence, and oppression instead of resistance.
Let’s say you think I’m being paranoid and alarmist about AI tools. Let’s say you don’t live by my late father’s axiom that “just because you’re paranoid doesn’t mean they’re not out to get you.” Even if you see some benefits to AI, there is no denying who ultimately benefits from teachers and schools finding ways to use and pay for it.
It is these men.
These are the men who have been made obscenely wealthy selling us on tech’s dreams, who stood up at Trump’s inauguration and silently hailed the failure of the American experiment. Our elected and carefully appointed officials are being replaced by the whims of a single one of them, Elon Musk, who became the richest man in the world by selling us on the marvels of technology.
These are the men helping this administration destroy the US Department of Education, which, though it could always have done far more, has provided civil rights enforcement and funding for children from low-income households and children with disabilities—in other words, has helped lead the work to actually democratize education.