Why Do People Hate Generative AI So Much?
Let me give you my reasons
I don’t use AI for my work here. World-Weary’s content is 100% human made.
That’s as much an ethical stand as it is a practical one; the kind of writing I do involves a lot of reading and citing reports and historical documents.
In one article where I discussed the Runit Island Tomb, I went so far as to find a letter from the Marshall Islanders to the United Nations in 1954 and linked a PDF document version for y’all to read.
I dig. I really do. I take it seriously. I try to present as much factual information as possible, although of course, I’m not immune to bias.
I’m only human.
I do my best to fact check — though I do make mistakes, of course — and provide sources when possible. I often wind up realizing that my own beliefs on a particular subject might not be ironclad, and I learn just as much as I teach.
Hell, I don’t even use spellcheckers very often! I used to use Grammarly, but it annoyed the heck out of me and I eventually cut it out of my process. AI spellcheckers and editing software are prone to errors in judgement, too. I’ve published rants about that.
If I’m unsure of my spelling, I flex my old school editing skills and look it up. And yeah, I sometimes miss a typo here and there. I’m not too fussed about it; I’m human.
As an artist, I see imperfections as nothing to panic about as long as you’re being authentic in what you present. If I can find spelling errors in professionally edited best-selling novels — and I have — then I figure you can forgive a couple of messy sentences here and there.
I do a lot of editing and re-writing as I go. It often involves questioning numbers and statistics I come across and double checking to make sure I’m not getting faulty information from dubious source material.
I do that because I worry about the quality and veracity of the information I provide.
AI doesn’t.
It doesn’t research, it doesn’t fact-check or verify. It only summarizes everything it finds relating to the keywords you request; whether that’s useful information or not is not something it can judge.
If you ask it for information on vaccines, it will be pulling information from anti-vaxxer websites as much as it pulls from peer-reviewed medical journals.
And no, that’s not a ‘difference of opinion.’ That’s a disagreement on the facts based on a lack of education and fear. Always get your medical information from experts, not Facebook.
It also can’t really take context into account; AI text-generating and editing software often works by taking each line on its own, rather than ‘reading’ and analysing the whole piece to decide whether it’s looking at a mistake or an intentional bit of wordplay.
It’s especially irritating from the perspective of someone who practices religion, culture and traditions from outside of the default norm of their country. If your language and traditions aren’t taken into account in the programming, you can be treated as if you don’t exist.
It’s discrimination by neglect.
From my rant about Grammarly:
“This software is a tool. It can be very helpful. But you cannot let it do the job without oversight; that’s just not going to work.
If I have identified myself in an essay as a polytheistic pagan, someone who worships multiple deities, then when referencing the multiple deities of my religion, a lowercase plural ‘gods’ is correct.
A human reading that piece would recognize that my choice of plural and lowercase letters is appropriate and sensible given the context.
An uppercase singular ‘God’ is the Christian spelling. It obviously wouldn’t fit that context, so a human editor would leave my spelling as is.
But Grammarly does it almost every single time. Because Grammarly is programmed from the perspective of a culture that is primarily Christian, ‘God’ is the only spelling that it recognizes.
That’s a pretty frustrating thing to go back and fix, and it’s a dead giveaway that Grammarly did the editing rather than a human proofreader.”
Using AI text generation for my work would not only be lazy, it would be a disservice to you, the readers to whom I’m trying to make my points.
That’s my main practical reason for not using AI.
My ethical reasons for not using AI range from environmental issues to workers’ rights.
Writers and artists are on the ropes right now. And the way things are going, in a few short years we’re going to be in serious trouble.
Mass-produced, cheap, low-quality writing is far more attractive to major corporations than having to interview, vet, and pay actual writers and editors who know what we’re doing.
They don’t care if their copy makes sense. They only care if it fills the space and draws eyes.
Corporations exist to maximize profits and reduce costs as much as they possibly can. That’s the capitalist mindset: more money good, less money bad.
People? Who cares about people?
Since generative AI hit the market, the demand for writers, artists, coders and other creative experts has taken a nosedive. That’s not a good thing. A lot of us lost work, income, and opportunities for our careers.
The number of established jobs lost has been relatively minor so far, but what about new hires? What about freelancers?
Hah. Funny.
I was making around $2,000 a month off of my freelance writing for a while there. Now I barely scrape a hundred on a very, very good month. Most of the time, not even that. Thank goodness for my full-time job or I’d be in dire straits. People who can’t find work, or can’t work due to disability, are far worse off than I am.
Our output didn’t change; the perceived value of that output did.
My time and effort became worthless. When someone can just type a quick prompt into an AI program and get results in minutes for minimal cost, why pay somebody by word count?
On top of that, the market is now saturated with poorly formatted, poorly plotted, poorly written AI slop that someone spent an hour on and threw at the internet to make a quick buck.
I bought a cookbook not long ago from a major bookseller in my country (I’m withholding the name just to avoid being sued, bear with me) and wound up having to return it because I could tell at a glance it was AI generated from start to finish.
It was bad. From the bizarrely long title to the unedited recipes that made no sense to the complete lack of formatting or pictures — somebody didn’t even try to make it look like it was made by a human being.
They self-published it, wrote a cheery description and sold it to major chain bookstores. Those bookstores stuck it up online for a ridiculous price considering what it was.
You bet your butt I lodged a complaint when I brought it back. I should have known better from the title, but honestly, they shouldn’t have even been selling it to begin with.
As an aside, this is not a jab at self-published writers. I mean hell, what do you think I’m doing on Substack? It’s my whole job here. But it does make it easy for these AI “writers” to scam people.
And let’s be real, it is a scam.
There are a lot of problems with this whole mess we’ve created, not least of which is the issue of declining literacy and related skills. Critical thinking and analytical skills are losing ground to the ease of asking ChatGPT to tell you what to believe.
The University of Calgary put out a short warning about exactly this problem. It emphasized that AI could be helpful when used as a tool to aid individualized learning, but that over-reliance on AI was massively detrimental to a student’s progress.
It threatened their ability to excel in further education by limiting their engagement with source materials and negating the need to study. If you just tell ChatGPT to write your essay on your chosen topic and submit what it gives you… what have you learned?
Teachers can ban it from the classroom, but you have no control over what the kids do when they get home.
There’s also the impact it has on culture. When generic, AI-generated art of any stripe becomes our main source of creative expression… where’s the soul?
Where’s the thought-provoking statement behind the piece? The themes? The point you’re trying to make beyond publishing something pretty and appealing? What are we supposed to learn from what you’ve created?
There’s a place for art and writing that exists purely for enjoyment, but it shouldn’t be the majority of published work!
And furthermore, where’s the trust? Because half the time now, I see a beautiful piece of work posted online and the entire comment section is just an argument over whether it’s AI or not.
That’s not good. Especially when you factor in the misinformation and propaganda spiral we’re locked in now; public trust is at an all-time low. People can generate videos, pictures and voices so disturbingly realistic that nobody can trust video footage or audio evidence anymore.
When entirely faked soundbites and fake news clips are dominating the digital media ecosystem, how do we know what’s real?
When you publish a book written by AI, you’re not trying to create commentary or provide reliable, trustworthy information. You’re passing along generalized points the AI pulled out of a hat, depending on the prompt you gave it.
Compared to someone who has spent years pouring their heart and soul into improving their craft, refining a message and expressing themselves and their perspectives through their work, do you think you’re making something worthwhile?
Worthwhile enough to make up for the irrevocable damage it’s doing to the rest of us, our culture and our environment?
Is it worth the outrageous cost in vital resources, the career stagnation, the complete restructuring of the economy, the flooding of the markets? Is it worth the loss of education as kids use generative AI to write their essays and cheat their way through school?
Is it worth the culture where people online use ChatGPT and Grok like trusted encyclopedias? Is it worth the loss of trust in our journalistic institutions?
Is it worth the loss of creative self-expression?
I don’t mean to sound elitist, guys, but come the fuck on! We have to draw a line somewhere.
The worst part is that I don’t think we can go back. The cat’s out of the bag, and we’re all getting flogged. We built Jurassic Park and we brought in the customers, and now when we see the raptors eating people we just call it fake news.
I don’t know what to do about this. But I know that I find it fucking depressing.
And that’s why I can promise you, World-Weary’s content is 100% human made, with all of the typos that entails.
Solidarity wins.


The day a so-called 'professional' editor ran part of a chapter of one of my stories through Grammarly and completely negated my main character's Irish voice was the day I refused to have anything more to do with AI as a 'writer's assistant'.
Google's AI answer bot, now seen at the top of most search engine queries, often has an 'Answers may be inaccurate' warning in a tiny font at the bottom of whatever load of nonsense the bot has spat up.
AI now has a dangerous, self-reinforcing feedback loop. AI is trained on whatever information the AI companies can get their hands on. At first it was classic books, Wikipedia, and other decent sources.
Then it expanded to whatever rants people made on X, Facebook, Instagram, biased news sites, etc.
Then AI started using what I can only describe as “low-quality sources”: AI-generated “news”, churned out at an ever-increasing rate and ingested back into the models at an equally ever-increasing rate. This feedback loop steadily degrades the quality of what AI produces.
So it gets worse and worse. Steve Bannon wrote that the way to change how people think about politics and social issues is to “flood the news with bullshit”, and that is what is happening at an ever-increasing rate. “Truth Social” is an oxymoron.
In other words: AI is not only making people more and more misinformed; it is making ITSELF less and less reliable. And part of that loop is people: they can no longer tell truth from fiction.
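If you want to see the shape of that loop, here is a toy sketch in Python. Every number in it is invented and purely illustrative, not a claim about any real training pipeline; it only shows how the average slides once lossy AI copies take a growing share of the training pool.

```python
# Toy sketch only: invented numbers, not a real training pipeline.
human_quality = 1.0   # baseline quality of human-written sources
noise = 0.15          # assumed quality lost each time AI re-ingests AI text
ai_share = 0.2        # assumed starting fraction of AI text in the pool

quality = human_quality
for gen in range(1, 8):
    ai_quality = quality * (1 - noise)  # AI output is a lossy copy of its input
    quality = (1 - ai_share) * human_quality + ai_share * ai_quality
    print(f"generation {gen}: average quality {quality:.3f} (AI share {ai_share:.0%})")
    ai_share = min(0.95, ai_share + 0.12)  # AI text keeps taking a bigger share
```

No single generation is catastrophic; the decline comes from the loop never stopping.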
The new dark ages are approaching as society dims Truth’s lights.