Not a ragebait post.
I started thinking about why I hate AI, and it’s mostly:
- It is shoved down my throat very hard relative to what it actually does;
- The unauthorized use of content on the internet;
- The worsening of the environmental crisis;
- The content it generates is shit.
I am wondering, do you have any other arguments against it?
LLMs are a tool with vanishingly narrow legitimate and justifiable use cases. If they can prove to be truly effective and defensible in an application, I’m OK with them being used in targeted ways much like any other specialised tool in a kit.
That said, I’ve yet to identify any use of LLMs today which clears my technical and ethical bar enough to justify their use.
My experience to date is that the majority of ‘AI’ advocates are functionally slopvangelical LLM thumpers, and should be afforded the same respect and deference as anyone who adheres to a faith I don’t share.
Name a major AI company that isn’t currently attempting to circumvent government agencies and usurp the democratic power of control away from the citizens.
that’s reason number 1.
reason number 2: I never trust any solution that has to be forced on people. I have to provide proof that I use it in my job because they made it a KPI. think about that. my employment is 100% contingent on proof I’m forced to provide, of something that adds nothing positive to my role. why in the fuck would this even be required?? if it quacks like a piece of shit, and smells like a piece of shit…
reason 3: because the conservation of human expression is important to me. from simple artistic expression to the spoken or written word, all are sacred to me, and anything that attempts to emulate and eliminate that expression is only a form of oppression against those who express.
- AI actively disincentivizes young people from reading, writing, thinking, learning
- It’s being positioned as a perfect advertisement and propaganda tool
not only young people. it’s gonna be like social media all over again where everyone stops thinking (further) if they play their cards right.
Your first point was argued by Socrates about reading and writing. He thought reading and writing would make the youth lazy and unwilling to use their memories like he and his forefathers had.
I’m not for AI just saying that this isn’t a great argument. Kids will just learn and memorize things differently.
We absolutely need some laws to prevent these tools from becoming propaganda tools since we can already see what they’re capable of in terms of influencing thought.
Socrates wasn’t wrong, though. He and his forefathers trained their memorization abilities to be able to record and recall vast amounts of information. We simply don’t need to do that, and now we can’t. If you forget the details, just look it up. Easy.
If that applies to all human knowledge and learning instead of just memorization, that becomes a problem. In the US specifically children are, in fact, learning and memorizing fewer things. Reading and math scores are at record lows. SAT scores are down and falling further. Literacy itself is on the decline, people not only read less but they’re also worse at retention and comprehension.
The pandemic lockdowns were half a decade ago, we can’t just keep blaming them. Something is hurting our ability to learn.
It empowers dull people to flood the world with dull sludge, drowning out those with actual talent and a creative spark. Great writers connect things that have never been connected before - Douglas Adams wrote “they hung in the sky in much the same way as bricks don’t”. AI could never create that sentence, precisely because it connects things that have never been connected before. AI could come up with a million similes for things in the sky, all of which have been used before, but never an original one which shouldn’t work, but does.
I don’t believe dull people actually exist, there are only people whose spark hasn’t been lit.
AI is snuffing out those sparks en masse, preventing many of our greatest future talents from ever existing in the first place.
This machine will kill us all, and it won’t even need to be smarter than us to do so…
- because it’s “THE NEXT BIG THING TM”, like the metaverse, 8k tvs, cryptocoins, etc, thus being sold as the be-all end-all savior of humanity;
- because of many, many, many economy-related reasons (nvidia, circular bubble, stupid money being thrown around nonstop, environment, etc)
- because some people 100% trust the output, even when it’s easily disproven bullshit or it looks/works like shit
- it’s a culmination of years and years of every internet user’s unaware or half-aware work, and now we’re supposed to be fawning over that shit
- because it’s empowering bullshitters and scammers: it’s never been easier to create pieces of shit in the hopes of earning money out of it - websites, text, code, music, drawings, videos.
- adding to the above, it’s making a bad problem exponentially worse, that of the “dead internet theory”. By 2021, before any publicly available “AI”, SEO shit sites and videos were already making life awful for anyone who wanted to find something. Nowadays, I would wager that over half of Google’s top 100 results for any given search are LLM-generated, 40% using old-style SEO shenanigans that always manage to get the exact search term into their body.
I am perfectly capable of failing a task by myself.
People use it to fabricate evidence convincingly.
People use it to pad content that could have been brief.
Unimaginative people flood content streams with low quality stuff making it even harder to find good content.
We are throwing every technical and financial resource we can. Starving other needs.
Douches won’t shut up about it.
The creative slop will be a persistent plague, though some of the other stuff will become more tolerable when the bubble pops.
It blurs the line of accountability, and it provides the facade of superintelligence, leading to negligent use of it.
AI cannot be held accountable. It physically can’t. You can’t criminally charge, fine, or imprison an algorithm. IBM reasons that because of this, it should not hold any position of management, or make major decisions autonomously.
Despite that, we constantly see it being used in increasingly high-stakes decisions, and advising on them. AI lawyers, politicians using it to communicate with their voters and “summarize” their concerns, AI in HR management, AI professors (as well as professors using AI), and the list goes on. There is no recourse for malpractice in these scenarios, and it allows bad actors to work with impunity. Nothing ever stopped anyone from spewing nonsense, that’s what freedom of speech is for, but the reputations of such people would be tarnished, they’d become outcasts in their field, and their writings disregarded. AI blurs that once again.
Closely related to the issue of liability is the negligent use of AI. If someone wanted to create misinformation, they used to need malicious intent. Now, out of pure laziness or profit-driven desire, most content has become AI-generated, with all of its hallucinations and delusions included. Because AI training data now includes AI content, these delusions cause the model to become “inbred”, which causes it to repeat its own lies until they’re regurgitated as fact.
This in turn causes a death of truth, and of all professions that hinge on providing it: journalists, researchers, scientists, publishers and writers of academic journals, as well as small communities of hobbyists drowned in misinformation about their own niche craft. It destroys and buries real, truthful and productive conversation, while hindering all intellectual progress.
Its existence is a fantasy for anti-intellectual actors, which include governments and large corporate entities whose greatest enemy is a well-informed and educated public.
A couple of reasons, besides the obvious:
- It promotes brainrot and discourages us from being creative and doing real research ourselves. It may take longer, but manual development is more valuable and unique.
- It warps our perception of reality. With the way LLMs word their answers, they seem really convincing. Later you might realise it was actually wrong or only partially correct. This is problematic when users search for mental health advice, career planning, legal advice, etc.
- Many, including the American government, use AI-generated slop to spread propaganda and misinformation more effectively than ever. It’s scary just thinking about how many people can’t recognise the difference between AI and real, and those are usually the ones voting against the collective good of society.
- It’s just not worth it. It makes mistakes, it hallucinates, it forgets… With the time we spend trying to get the AI model to generate what we need, in an attempt to skip the hard work or the learning, we could probably do a proper piece of work ourselves if we put in the effort. When ChatGPT first came out, I admittedly used it a lot for my assignments, and I would say it was more of a hindrance than a help. At the end of the day, I didn’t learn anything and I wasn’t satisfied with the work. “If you want something done right, you gotta do it yourself.”
- Deepfakes. And they’re only going to get better.
- Google is ruined. It’s all AI slop websites now.
- I genuinely hate how positive it is, how it writes 1000 words for everything, and gives 7 part answers to simple questions. No matter how many times I tell it to give shorter answers.
- Frequently wrong. Every little detail must be double checked.
- It can be kind of a dick about enforcing copyright or random things. I got in an argument with it recently when it refused to give me a movie quote. One sentence. After 5-10 back and forth messages, it told me the quote.
- We’re opening the door to charging for simple Google type searches. I worry that 15 years from now, I’ll have to pay like $1 per question.
“after 5-10 back and forth messages it told me the quote”
What the actual shit are you even doing? Genuinely, honestly and seriously.
By your own admission you hate the way it speaks, you hate how it’s used, you hate that you have to jump through hoops to get any semblance of a correct answer out of it, and you hate how it’s affected the internet as a whole.
SO WHY ARE YOU USING IT? Stop paying these companies with money or data, stop letting them claim your time, and stop supporting things you admit are actively making the world a worse place.
Could/should have just used a search engine…
I asked a river rock and gained a deeper understanding than any corporation’s mouthbot could ever provide. And this cool rock.
Because Google is useless now. I tried Google first. Though AI was pretty useless on that occasion also.
It can be kind of a dick about enforcing copyright or random things.
Oh the irony…
We’re opening the door to charging for simple Google type searches. I worry that 15 years from now, I’ll have to pay like $1 per question.
Search engines cost money to run. If you aren’t paying for the product, you’re being monetized some other way. You’re the product that’s served up to the real customers - the businesses who buy data and advertising.
That’s why Kagi is a thing even though there’s free search engines all over the place - you pay a monthly subscription, and then you are the search engine’s customer.
Kagi is honestly amazing. I was expecting it to be a little better than Google, but most of the time it even takes me back to a time before the great sloppification. I look stuff up and I find it, that’s all I want and I am gladly paying for that.
I don’t. I hate machine learning slop being marketed as “AI” and assholes buying up years of hardware stock & burning through water supplies and energy like they want this planet to become uninhabitable within a decade.
Apart from the obvious environmental issues, I hate that “AI” promotes laziness.
I work as a software developer and over the last months, I slipped into a habit of letting ChatGPT write more and more code for me. It’s just so easy to do! Write a function here, do some documentation there, do all of the boilerplate for me, set up some pre-commit hooks, …
Two weeks ago I deleted my OpenAI account and forced myself to write all code without LLMs, just as I did before. Because there is one very real problem with excessive AI usage in software development: skill atrophy.
I was actively losing knowledge. Sometimes I had to look up the easiest things (like builtin Javascript functions) I was definitely able to work with off the top of my head just a year ago. I turned away from being an actual developer to someone chatting with a machine. I slowly lost the fun in coding, because I outsourced the problem solving aspects that gave me a dopamine boost to the AI. I basically became a glorified copypaster.
This is what all of those big AI companies want. They want people dependent on their stupid little chatbots, just so they can suck a monthly subscription out of you. That really doesn’t sit right with me - I always wrote code to pay my bills, and paying someone else to write that code for me feels disingenuous, in a way. I would probably be more open to AI and use it more if I had the option to host it locally. But now they’re hoarding all of the memory, CPUs and other technology that would enable me to do so and driving their prices into unobtainium territory, and they can all get fucked for this.
I don’t want to be a “prompt engineer” and outsource my brain into an LLM. Thank god my employer doesn’t force me to use any AI at all and I don’t have to be fast, I just have to be fast enough and produce quality code. And I can do this all by myself, I always could.
I do feel like the last man standing, sometimes. Almost all of my colleagues and friends (who are also developers) have drank the AI-koolaid by now and I get so many messages like “We have Windsurf at our company now, you must use it or you’ll be left behind!”. It’s so hard to push back and resist this hype cycle, especially for students and junior developers, because they don’t have much experience and can be so easily exploited by their employers and AI techbros…
So that’s (mostly) what I hate. A good technology that’s being misused by a capitalistic system. Again.
Their data centers are absolute water chugging units, and that’s not a compliment. They suck up so much water from the rest of the population, it makes me wonder if the ex-CEO of Nestlé maybe had a point when he said we should be charging for water that isn’t used for “fundamental needs.”
Just consider: in Mesa, in Maricopa County, Arizona, Meta has one datacentre, Google has another and is building two more, while Microsoft already has two. Meanwhile, the state of Arizona has denied permits to construct homes, specifically citing a lack of groundwater.
And I haven’t even mentioned the other health effects these facilities seem to be having on local populations.
But if I can take a moment to be a little more selfish, the cost of RAM as a result of this whole stupid thing has especially pissed me off. Prices for DRAM and NAND have spun so far out of control that we can’t even get pricing for new technologies, because they’re still under review due to unclear memory costs. Steam Decks are completely sold out, with no new models being developed, and while they haven’t said anything, I have no doubt this will delay the release of the Steam Machine, much like how Sony is now considering a delay to the PlayStation 6 for the same reason.
I hate it because I won’t be able to escape from it. It will permeate everything and destroy whatever bit of functional society we have left. Forget about the internet becoming nothing but AI bots talking to each other, eventually most IRL interactions will be diverted to AI or have to be screened through AI. You already can’t talk to a human at any online business, and even companies that have phone numbers route you through endless menus–those will all become AI bots too, and repeating “representative” into the phone will no longer do you any good.
Even doctors are already using it now, to shave a few more minutes off each appointment, by getting an AI summary of the patient’s records (probably full of wrong info) so they don’t have to bother to read the chart. Then they record the visit and get an AI summary of it (again likely full of errors), so they don’t have to write anything either. That is already happening now. It’s bad enough now when you can usually only get in to see the nurse practitioner instead of the doctor (while paying the same fee as when you do see the doctor), it won’t be long before we’ll be limited to chatting with an “AI practitioner” (and still paying the same rate).
It drives up the price of consumer electronics, because AI firms are buying up RAM, storage, and GPUs.
It uses up potable water that we need for drinking, agriculture, and other vital uses.
It’s not even reliable, given what it costs.
If it were reliable, it’d threaten the livelihood of millions.