Earlier this week I posted the first part of a series about how teachers might resist the latest edtech mania—the push for AI—by arming themselves against some of the messages meant to rush them into using AI tools.
In this second part, I cover a few more.
“You have to prepare your students for the future!”
This message packs a punch because it taps into teachers’ hopes and wishes for the children and young adults they teach and love. Then there’s the attendant guilt that can dog us, the feeling that we could always be doing more.
K-12 teachers I interviewed for Schooled saw their mission as more than developing skills and delivering content. All saw a role in helping shape future adults—while understanding those future adults would take many shapes. They were preparing students for college or career, but even this was a means to an end: socially engaged, generative people who fulfill their potential.
They weren’t focused, like so many politicians and business leaders, on the future workforce—because teachers see children as something other than future worker bees. The World Economic Forum (WEF) is one organization eager to push AI into schools because “[i]ntegrating AI into education, through traditional or innovative methods, is key to shaping tomorrow's workforce.”
In high school I took a business elective where a lot of time was spent learning how to operate a keypunch machine. After college graduation, I went to work for an investment bank and consumer product companies, and as you can imagine, not one of them was interested in my ability to use a then-obsolete data entry method.
Technological change comes ever faster, which is why a question from edtech expert and former teacher Tom Mullaney rang in my ears like the theme from 2001: A Space Odyssey: “Which future are we preparing students for?”
Secondary computer science teachers, librarians, and teachers of certain college-level disciplines may have good reasons to teach students what AI can, cannot, and might do. Humanities teachers may want students to research the topic and its economic, social, and ethical considerations.
But what pretty much all of us are being urged to do goes well beyond that. We are told we must teach our students how to use AI in preparation for a future we cannot predict. As Mullaney puts it, though, we are “teachers, not time travelers.”
In a thoughtful piece, Mullaney points to the absurdity of trying to revamp classes and schools for a rapidly evolving technology that may not meet its hype and may not look much like it does today when today’s students are working tomorrow. But in schools where teachers are being told to “go ahead and try it,” they may expend scarce resources re-tooling curriculum and instruction for a future we cannot know.
The companies that hired me in the 80s and 90s were largely looking for the same skills that the wide range of employers seek today—skills I’d gained from my high school classes, summer jobs, extracurriculars, liberal arts college, and in those firms. Here’s what Indeed, “the #1 job site worldwide,” says gets you hired:
The WEF uses a kind of circular logic to present the value of teaching AI in schools: “AI presents an avenue through which students can improve digital literacy, critical thinking, problem-solving and creativity, preparing learners for future job demands.”
The thing is, we have numerous proven ways students can gain these skills, and AI mania can distract us from them—and from the many costs of AI.
“Students are already using AI and it’s here to stay, so you must teach them how to use it ethically!”
This is the biggie, based on a sense of inevitability. AI is an inexorable Zamboni that will resurface the ice with you unless you grab onto the back and ride.
Is it my responsibility as an English teacher to teach students how to use AI? I already have a basketful of responsibilities—and AI is working against them. My core mission includes teaching students how to (actually) read and (actually) think about and discuss what they read; to (actually) analyze rhetoric and literature; to (actually) conduct (their own) research; and to express (their original) thoughts in writing. As Professor Cate Denial put it, “There is nothing that I ask my students to do in my [history] classes that benefits from being done by generative AI.”
A rising number of students are already using tools such as ChatGPT to replace their own reading, thinking, research, and writing—and often with crap. Just one example: a student “produced” for me a literary essay that was not only stiffly written and illogical but included quotes from a text other than the novel being analyzed—which the student didn’t realize because, by going straight to ChatGPT, they had skipped reading the book, thinking about it, finding evidence in it, and constructing an argument.
The bad news is that as these tools develop further, plagiarized work will likely get harder to catch. Imagining we can tame this new, evolving, many-headed beast of plagiarism by teaching students to use it well is folly. My student would not have avoided abusing AI because I taught them how to use it to do something else. Cheating is an age-old problem that AI just makes much, much easier. If the goal is teaching academic integrity, perhaps refocusing schools on ethics is a better idea.
This “you can’t beat them, so you must join them” plea may sound familiar—because it’s what we heard about cell phones. Told it was our job to teach responsible phone use and even to integrate phones into our classes, teachers were given an impossible job. A decade later, cell phone bans are sweeping the nation, and more educators are recognizing the distractions of other devices overused in classrooms.
Educators are working to ensure students aren’t avoiding the cognitive work that builds skills, knowledge, and critical thinking by offloading it to AI. Some have moved to having students write essays in class, an imperfect, partial, but necessary step. Robert Talbert, a math professor, restructured his assessments to help ensure his students are not just producing answers—which they were doing in mere seconds using generative AI—but learning how to solve problems.
There is one real clincher, though, on the ethics question, or there should be. “Teach them how to use it ethically” is a line that ignores the impact of AI on the planet our students will inherit from us. It must ignore this because—setting much else aside, including the industry’s abuses of copyright and of workers—AI’s environmental abuses alone mean there is no such thing as ethical AI use in most classrooms.
AI consumes huge amounts of water and energy and produces significant emissions, air pollution, and electronic waste. Given this, when schools and entire communities are burning in wildfires and being leveled by hurricanes and swamped by floods fueled by climate change, why on earth should teachers be expected to use it?
Dr. Chuck Pearson details many of the costs of generative AI, concluding:
Stolen intellectual property, abuse and exploitation of labor, and accelerating environmental damage. It should be abundantly clear that there isn’t an ethical case for using generative AI. Period.
The harms that the development of this technology has perpetuated and continues to perpetuate, if they were known fully and discussed freely, would horrify.
As Pearson points out, our technophilic culture leads us away from questioning new technology, routinely and uncritically described as “advancements.” Given the costs of AI we already know about, this is the time for hesitation and questioning, not for haste and technochauvinism.
When a teacher resists jumping on the AI-in-education bandwagon, they are not being timid or out of touch. When they plan their lessons and grade papers without AI, they are not wasting time. When they don’t teach students how to use it, they are not being irresponsible. By focusing not on their students’ ability to use generative AI but on their students’ ability to be generative and thus thrive in a world that can sustain them, they are absolutely thinking about the future.
I’m afraid a Part III is on its way, because more of these messages keep arriving. “AI will make education more equitable!” and “AI is meant to improve, not replace, human teachers!” are just a couple I haven’t gotten to yet. Please let me know if there are others I have missed.