Image from Parks and Recreation
My style of writing, especially blogs, is heavy on figurative language. Ergo, I’ve made some crazy comparisons in my posts. I’ve compared seeing my best friend after four years to a supervillain boasting how he’s about to get his revenge. I’ve compared my favorite coffee shop closing to my loved ones dying. I’ve compared being informed I needed to write a themed post to a mob boss strangling a squealer to death.
Not today. I’ve been trying to find a point of comparison for today’s subject, and I’ve come up empty. Never in my life have I encountered something so unwanted and yet so forced on the general populace as the so-called “AI boom.”
A long time ago, my dad and I watched a video about the theology of technology. A pastor with a thick Scottish accent explained that technology is as sinful as the people who use it. Whenever something involving vaguely defined entities like the government or the Internet bothers me, I try to step back and apply Scottish Pastor’s rationale. Is Twitter/X, as in the lines and lines of code that compose the site, bigoted and misinformed, or is that a description of the website’s user base? Are corporations—the multinational business organizations verified as existent by the Better Business Bureau and the Nasdaq—responsible for stagnant wages, seventy percent of greenhouse gas emissions, and the ravaging of the Global South so that grocery stores have coffee beans year-round? Or should I be blaming the greedy, ruthless solipsists at the top of the corporate pyramid?
Stepping back to think through this article, I asked the same question about AI. Do I have a problem with AI, the ones-and-zeros replica of a human brain that can generate images, google questions for me and add songs to my Spotify queue? No. Heck, I’ve used AI myself. While writing my novel, I used an AI character-appearance generator to give faces to the characters swirling around in my head. I’ve used Siri for as long as I’ve had Apple devices. I’ve used friends’ Alexas and even messed around with ChatGPT.
But that’s the thing: I use AI because I want to, not because every technological platform I use and many that I don’t keep poking me saying, “Want some AI, bro? Huh, huh? Try this AI, bro. We know you want to, bro. Come on, bro, the Internet doesn’t have enough photos of people with twelve fingers, bro.”
The AI push isn’t the first time tech and social media companies have tried to push an unpopular feature on their user base. In 2013, Google tried and failed to keep Google+ afloat by tying it to users’ YouTube accounts, requiring YouTube users to have a Google+ account to leave comments. Two years of backlash later, they discontinued that requirement. Apple tanked the reputation of U2’s Songs of Innocence by forcibly downloading the album into Apple users’ music libraries on its release day. Entire books have been written about all the unnecessary changes Elon Musk has made since he took over Twitter. The difference is that in each of those cases, a single company was behind the unpopular feature. Now, it feels like every tech company and social media platform is trying to force AI’s popularity through sheer overexposure.
In the name of venting my frustration about the AI pill Zuckerberg, Musk and their ilk keep trying to force-feed me, here’s a comprehensive list of every instance I could find of AI actively making the world worse.
Education
All kinds of articles and studies show that AI is affecting students’ critical thinking at every level of education. The brain is a muscle. Just as having someone else lift weights for you won’t build your biceps, essays and test answers pulled together with ChatGPT are work students’ brains aren’t doing.
The issues go beyond students “writing” essays with nary an outline or class notes in sight. Tests of AI tools have found that sixty percent of AI-generated responses contain plagiarized material. A Pew Research Center survey found that one in four teachers believe AI is making education worse, not better. The percentage climbs the higher up in education you go: nineteen percent of elementary school teachers, twenty-four percent of middle school teachers and thirty-five percent of high school teachers. Another survey, from the EdWeek Research Center, found that sixty-nine percent of teachers worry about what AI will do to students’ mental health. Teachers have also reported concern that AI will take a hatchet to students’ creativity, self-confidence and critical thinking.
AI also opens up whole new avenues of cyberbullying and harassment. Only a few years into AI’s availability to the general public, there are already several cases of students using AI to make deepfake pornography of their classmates. AI replications of people’s voices are also a concern, according to an article from The Hill. Diverting briefly from the topic of education, it’s worth mentioning the case of the French woman conned out of $850,000 by scammers who used an AI replication of Brad Pitt.
Back to education. I don’t need to speak in hypotheticals anymore. I’ve seen firsthand how much of a drain AI can be on education.
Before my school district went on Christmas break, my sixth and seventh graders gave presentations based on the novels they’d been reading. When it was her turn, one of my seventh-grade girls came up to the front, got one or two sentences into her presentation, and then admitted to the class she’d “used Google to look up words.” What followed was a painful couple of minutes as she stumbled through a slideshow she’d supposedly put together. She denied any AI usage when my partner teacher and I confronted her, but the proof was in her inability to read her own presentation. I suspected two sixth-graders had also AI-generated some of their slides, the quality of their writing peaking in one slide and then plummeting in the next, but I never found proof. When I came back from Christmas break, I caught a student opening up ChatGPT to do a writing assignment.
As a wise music critic whose head vaguely resembles a cantaloupe with glasses often says: “NOT GOOD.”
Entertainment
At a fifty-three percent ‘Rotten’ rating and a forty-three percent audience rating, the Marvel Cinematic Universe’s Secret Invasion miniseries is the fifth worst-rated MCU production on Rotten Tomatoes and the third worst-rated MCU TV show. Only Netflix’s misfire of an Iron Fist adaptation and the so-bad-it-killed-virtually-any-respect-or-interest-towards-the-namesake-IP Inhumans series rank below Secret Invasion. Where Secret Invasion differs from its fellow sucking voids of good quality is—you guessed it!—AI. The show received criticism for outsourcing its intro to an AI company.
Later in 2023, the WGA and SAG-AFTRA unions both went on strike, with the writers’ strike stretching to 148 days, one of the longest in Hollywood history. Their primary concern? The prospect of studios and networks AI-generating scripts rather than hiring writers. According to WGA board member and comedian Adam Conover of Adam Ruins Everything fame, the strike-ending agreement between the unions and the studios forbade the use of AI-generated scripts or AI-generated source material. A producer can’t “write” a novel with ChatGPT and then pitch it for adaptation to avoid paying writers or the author of the adapted source material. Yes, it’s good that the problem was basically stopped before it started, but my issue is that the problem existed in the first place.
The attempt to force AI into the creative arts doesn’t stop at the boundaries of Hollywood, either. Sarah J. Maas, who’s had a meteoric rise to fame thanks to online literary hubs like BookTok devouring her novels by the pound, found herself in hot water when her publisher admitted to using AI for the UK edition of one of her novels. At the publishing and book-production level, AI has steadily wormed its way into the literary landscape, and that’s not a good thing. The Amazon Kindle ebook store has been inundated with low-quality, often plagiarized and nonsensically “written” AI books. Spines, a startup that “publishes” AI-generated books, told news outlets it wanted to distribute 8,000 books by the end of 2025.
These people don’t care about art. They don’t have an ounce of passion or creativity. They want money, they want it yesterday, and they’ll happily steamroll over the artistic world for a payday.
Let’s wrap up this section by talking about Wrapped. Spotify Wrapped, the annual summary of Spotify users’ listening habits, to be specific. 2024’s Spotify Wrapped sucked. That’s not a personal opinion, by the way: Inc, Vox, Forbes and many other media outlets wrote articles criticizing Spotify’s incorporation of AI into Wrapped, calling the company’s new direction stale, pointing out (truthfully) that Spotify dropped beloved features and replaced them with new ones no one wanted, and arguing that 2024’s Wrapped is the latest in a long string of puzzling moves Spotify has made in the last few years. Those are the professional opinions. Type ‘Spotify Wrapped AI Reddit’ into Google, and you’ll find dozens of Reddit posts criticizing the nonsensical AI-generated micro-genres that replaced previous years’ listening trackers, the AI-generated podcast where two monotone voices droned their way through users’ year of listening, and the overall hollow, slapped-together feeling of a Wrapped with barely any human involvement.
Social media
Even though ChatGPT wasn’t around for Silicon Valley to drool over until 2022, AI and bots have been making the average social media user’s experience worse for years. As far back as the early 2010s, Instagram had a major problem with bot accounts. When Instagram finally cracked down (or tried to) back in 2014, celebrities like Justin Bieber, Akon and Kim Kardashian lost millions of “followers” overnight.
Ever heard of the “dead Internet theory”? It claims that AI-generated content has long since overtaken the content made by flesh-and-blood users, and that the majority of Internet interactions (likes, shares, comment-section exchanges) are just bots talking to other bots. There isn’t much empirical evidence to prove the dead Internet theory true, but much like the studios having to be told not to use AI-generated scripts or source material, the fact that we can even consider it plausible brings the irrationally angry part of my brain online.
Did I say there’s not much evidence for dead Internet theory? I lied! And you have a certain billionaire whose name rhymes with Feelon Nusk to thank. The New York Post reported on a network of over 1,000 AI-run Twitter accounts that pilfered real users’ profiles for photos to put faces on the bots. Australia’s ABC publicized a report from a cybersecurity firm that concluded 75% of Twitter content wasn’t made by human beings. And every so often, you get a glimpse of the problem’s true extent. ChatGPT and other AIs are machines, and machines malfunction. When they do, you get walls of tweets spamming ChatGPT’s error message and Amazon products titled after it.
Now if the bots and AI accounts were a tool social media companies used to boost their numbers and therefore their profits…I’d still have a problem. That’s scummy. But of course, the geniuses running our online social lives could never settle for simple greed. The same New York Post article noted that these AI Twitter accounts peddled all kinds of cryptocurrency and blockchain scams, as well as every type of misinformation under the sun.
Similarly, in the late 2010s, online artists across a host of platforms found their work plagiarized by sketchy online T-shirt sellers that used bots to patrol social media for art to filch. The T-shirt sites naturally did nothing, and neither did the social media platforms whose pockets they were picking, so it fell to the users to get justice, either by tricking the sites and their bots into plastering shirts with incriminating statements or by baiting them into provoking the wrath of corporations viciously protective of their IPs.
Lastly, and most relevant to the idea of “nobody wants AI,” is the tomfoolery tech companies have gotten up to re: AI. Google and Meta have both implemented AI features, and neither has made them optional. Whenever my students have to do research, explaining that Google’s AI Overview isn’t reliable has become a mandatory step. Meanwhile, Facebook and Instagram, likely in an attempt to get users to engage with a feature no one wanted, have put the virtual switch for Meta AI in their respective search bars. The only reason I can say I’ve “used” Meta AI is because these nimrods put the Meta AI button where I used to tap to search somebody up. In January, Meta announced they would be openly generating accounts with AI tO bOoSt EnGaGeMeNt. Because that’s how you get your numbers up, right? Not by putting users’ feeds back in chronological order, or cutting ads and sponsored posts out of feeds, or damming up the steady stream of misinformation on your platforms, or not selling users’ data to the highest bidder. More friend requests from “people” with seven fingers on each hand and that waxy Botox-overdose quality to their skin. That’s what people want, right?
Conclusion
Now that I’m on the other side of a rant that has been percolating for months, I think I have an analogy.
I mentioned BookTok a few sections back. First, RIP BookTok and the rest of TikTok. What should happen to Facebook happened to you. Back on topic. You could call me an anti-BookToker. Between my zero interest in BookTok’s preferred genres of romantasy and smut, the buzzword descriptions, and the consistent feeling that millions of BookTokers were reading the same five books over and over, I Miles Morales’d it and did my own thing. The lowkey irritation I felt whenever I ventured into BookTok and had a legion of twenty-something white women push the slow-burn, enemies-to-lovers, sharing-a-bed, hockey-player-motorcyclist dark romantasy of the week on me is the same lowkey irritation I feel about AI.
Elon? Mark? Jeff? Daniel Ek, CEO of Spotify? Are you guys listening? Of course you’re not, you probably can’t hear anything over the sound of your newest mansion being built. I’ll pretend like you can for my outro.
No one wants AI.
No one needs AI.
Implementation of AI is all but guaranteed to make a platform or a device worse.
So heed the paraphrasing of blonde supposed-to-be-teenage 2004 Rachel McAdams.
STOP TRYING TO MAKE AI HAPPEN, IT’S NOT GOING TO HAPPEN.

Noah Keene graduated from Calvin University in December 2021 with a major in creative writing and a minor in Spanish. He currently resides in his hometown of Detroit, Michigan. He spends his free time reading and putting his major to good use by working on his first novel. See what he’s reading by following him on Instagram @peachykeenebooks and read his other personal writing by going to thekeenechronicles.com.
