Like, every AI generated thing I’ve seen, when viewed from the eyes of someone who actually knows what they’re doing, is at best below average. Maybe some things aren’t quite as bad as the general “AI slop”, but of the things I’m actually experienced in (code and art), I just see so many amateur mistakes in everything AI.
Regarding art, AI can make really visually appealing things, but it gets the details wrong. That’s something that a below average artist does. And regarding code, it’s the same thing. Overall, it has the appearance of decent code, but it gets the details wrong, just like a below average dev. (Probably about the level of a high school senior or college freshman.)
I’m not super experienced at writing, but I can also tell that it’s not very good at that. The stories it writes just aren’t compelling, but I’m not experienced enough to tell you why. And the same with music. It’s just below average, but I couldn’t tell you why.
I’m not trying to sound elitist by saying this, but I’ve noticed people who aren’t very good at these things tend to praise how good the AI is.
So, is it just me, or are the big fans of AI just below average at whatever the AI is doing?
Oh, it gets so much more entertaining than this.
- Competent artists I know say that degenerative AI will never be able to do art, but they think it may be fine someday for writing or coding or such.
- Competent writers I know laugh at degenerative AI’s ever replacing them, but are pretty sure that someday it may replace coders or musicians.
- Competent coders I know shake their heads at even the very thought of degenerative AIs replacing them for anything more than rote work, but acknowledge that they’d likely be fine for music or art.
- Competent musicians laugh at the prospect of degenerative AIs ever making decent music, but nod sagely that it’s probably fine for art or writing.
It’s like a circle jerk of Gell-Mann.

I’m sure the lawyers are very excited about all of this. It’s an absolutely wonderful time to be a copyright attorney. We’re going to have lots of lawsuits regarding all the AI generated copyright infringement.
AI capital will be spent to undo copyright law or carve out exemptions. They cannot be beholden to the same rules as us lowly plebeians.
Unfortunately, you’re probably right.
I’m getting really tired of it. I wish I was wrong more often about this kind of thing.
No, you’re just overestimating where the average is.
People are vastly stupider and more useless than you give them credit for, and in a slop saturated world we’ll all get even more stupid and useless due to rampant brain rot.
We’ll reach human level artificial intelligence not by making LLMs intelligent (that’s fundamentally impossible with their design), but by lowering humanity’s average intelligence until it’s below that of a slime mold.
And a significant percentage of humanity is already there.
The Oatmeal had a wonderful post about AI art last year that captures so many of my own feelings around this: A cartoonist’s review of AI art
Man, this is surprisingly good.
Wow, that is beautiful. So yeah, he and I mostly agree. I would say that AI probably should be heavily restricted, because right now it’s putting the entire economy into a really precarious place, and it’s also developed through extremely extensive copyright infringement. But yeah, that’s a great take.
I say let all the AI tech bros jam it into everything they want. When the bubble pops and all the giant corporations pushing this shit collapse it will free up space for a bunch of new little guys to move in and grow. THOSE are the ones that need to be restricted to make sure things never get so centralized again.
Hi, I’m a little guy! :) https://port87.com/
I hope Google and Microsoft never financially recover, so email is truly free (as in freedom) again.
PS: Restrict me, daddy.
PPS: I’m actually very in favor of restrictions for email providers. We should all have to play by the same rules.
Someone once said to me “If a creative project is of high quality, the longer you look at it the more details you’ll notice. If it’s bad, the longer you look at it, the more detail you notice is missing.”
And I think that about sums it up. AI slop is programmed to be eye catching but can’t produce much detail beyond that.
This is on point. I don’t think OP’s title is accurate. The “people” aren’t below average, the “work” is. Sure, the inexperienced or lazy are using AI when they shouldn’t, but most of the problems are with the work.
I also think many who rely on AI often don’t have much overlap with those that like to learn, who may be supplementing with or getting support from AI (or not using it at all). To me, anyone who has a drive to learn is going to create above average work eventually, and anyone who doesn’t will simply set the bar.
I agree with this. It also makes me wonder if it would even be possible for a nonprofit or government to train an image generator (for example) to produce more detailed work. The output of a corporation can only be “good enough”.
On the topic of details, AI also sucks ass at giving details meaning. Or at implementing details in a way that makes sense.
LLMs are by definition “mediocre machines”. They are a statistical approach: the most common answer is far from the single best answer.
I would agree, yeah.
I think a big driving force is that people who are drawn to generative AI are more likely to be mediocre at a thing, as well as demoralised by the effort required to improve.
I can sympathise with that drive, at least. After all, I’m a pretty mediocre writer. I desperately wish I could be better, but I am so far away from where I’d like to be that it feels hopeless sometimes.
Sometimes I wish that I believed that it actually was hopeless, because then I could just give up on trying rather than having to bear the pain of practicing my way out of mediocrity. However, I care more about improving than I do about my discomfort, and so I keep going with the XP grind.
A big thing that keeps me going is that I have seen the power of practice. I’m still far from where I’d like to be (and no doubt when I reach that point, my ambition will have grown along with my skill such that I will still be satisfied), but I’m able to look back on my efforts of the last few years and see real progress.
That’s why I find people who use generative AI to be quite tragic — they’re like alternate timeline versions of myself. It’s more comfortable to believe that the reason you’re not good at things is because there are people who are Good at it, and people who are Bad at it. If it’s a case of immutable categories of capability, then you have an excuse not to try. What’s especially tragic is that when these demoralised novices use generative AI, that’s often because they still have that drive to create inside themselves.
But man, it sucks to see, because I know that they will never find the satisfaction they crave in these tools. Sure, they might make something they’re proud of, giving them a facsimile of fulfillment — but it won’t compare to what they could be feeling. When I argue against generative AI, I’m not just being anti-AI, but pro-Art. Actually, no, it’s more than that — I’m pro-passion. If they could cultivate the kind of vulnerability required to actually use and develop their inner passion, then I would treasure any piece of art or writing generated through that process. I might not enjoy the art itself, in its own right, but I don’t need to, because what I love most about art is that it’s a fundamentally human process, and so any creative work is best enjoyed with that context.
Ugh, it drives me mad. I just want to grab them by the shoulders and shake them, while yelling “PLEASE COME AND JOIN US. I GENUINELY WANT TO SEE WHAT PASSIONS DRIVE YOUR URGE TO CREATE. I KNOW IT HURTS TO BE MEDIOCRE, BUT YOUR PASSIONS ARE WORTH PERSISTING FOR. WE’VE ALL BEEN THERE, AND WE WANT YOU HERE WITH US SO THAT WE CAN HELP SUPPORT YOU”. Alas, screaming at someone like this is not an effective evangelisation strategy — even if you tell them that we throw better parties, and that they’re invited.
I wish I could upvote this more than once.
Generative AI has always been, and probably always will be, most attractive to the lazy and the cheap. And if a person is lazy and cheap, then they don’t care about making something good. When the choice is between effort and crap, they will always choose crap.
That’s a really good way to put it.
The thing that no one ever talks about in the software industry is how the majority of software developers are just barely good enough to get by.
I spent 10 years consulting and there are entire companies out there where nobody even knows what high quality code looks like.
LLMs are trained on all of this, so they produce code at the same level. Most developers don’t know the difference between good code and code that merely works (but is low quality).
In a world where no one cares about the code, and only cares that the product works (badly), LLMs are perfect.
I write code that no one is going to look at, ever (yet it goes in production).
Oh, I can tell good code apart from code that merely works… I’m just not skilled enough to produce the former. (Does that put me ahead of most people by default?)
To me, one of the best ways to close that gap is the book The Pragmatic Programmer. It’s old, and if you ask me it’s still as valuable as ever. It’s not about any particular language; it’s about how to write high-quality code in any language.
Hey, thanks. I wrote that response to be jokey, and I didn’t expect an actual, useful reply in return. I will check that out, thank you.
Listen, AI is a godsend for some people.
Take a totally fictional person who works in my IT department, Joe Foyle. He always tries talking about things that he flat out doesn’t understand, just oozing that “pay attention to me” and “I talk for the sake of me talking” vibe. For fucking years Joey boy here has been trying too hard to get noticed, often to the detriment of his own goals, and has refused to take constructive criticism/gentle guidance to dial that shit back.
He’s simultaneously upset that others are smarter than him while refusing to better himself with training, mentorship, and/or reading.
Joe fucking loves AI.
Now he can be the one (instead of the other principal engineers) talking about things that interest the C-suite. Now when Joe talks, people have to listen because AI is the future… holy shit, did Joe just become a goddamned futurist?
AI has to work because otherwise Mr. Foyle will be proven (again) to be full of shit. And so he pushes the narrative harder and harder that AI is the future and can do no wrong.
So in answer to your original question, OP - yeah, the biggest fan of AI that I (hypothetically) know is below average at doing things that he is supposed to be doing.
I think it’s like that Dunning-Kruger idea. People who are bad at things don’t know they’re bad, and are poor judges of quality.
So someone who’s kind of bad at coding isn’t going to know or understand what the LLM puts out, so they won’t fix as many issues.
Also humans are lazy, and when presented with something that looks good at a glance, we don’t really want to dive deeper.
I saw a PR from someone today at work. Guy’s nice but I don’t think he’s much of a programmer. He asked copilot to fix a warning. It did, and generated a linter error. So he asked it to fix that. It did, but for whatever reason decided to delete an entire function call.
Unfortunately that part of the code has no unit tests, so he just pushed it up for review. I look at it and I’m like if that call is important, don’t delete it. If it can be deleted, remove the now-unused code around it. We’ll see what he says.
He probably spent more time fussing with copilot than it would have taken to do it right in the first place.
I think it’s like that Dunning-Kruger idea. People who are bad at things don’t know they’re bad, and are poor judges of quality.
The amount of people that have shown me their really low quality tattoo and believed it was really good work is way higher than people with legit good body art showing it off.
People will buy a $75 printed decor canvas of a plant from Hobby Lobby before buying real artwork from a local artist.
It was these experiences that made me the pretentious art snob I am today.
Most people don’t know art appreciation.
Not even the base level critique that anyone should be able to do.
I think it’s like that Dunning-Kruger idea. People who are bad at things don’t know they’re bad, and are poor judges of quality.
Precede the line with a `>` to make a block quote. And precede it with a `#` if you want to yell!
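For example, typed as plain text before the comment is rendered, those two markers look like this:

```
> This line renders as a block quote.

# THIS LINE RENDERS AS A BIG HEADING.
```

(The blank line between them keeps the heading from being swallowed into the quote on some renderers.)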
Thank you. I couldn’t remember what symbol it was.
Thank goodness it’s not just me.
It’s the conundrum. “When I ask (random slop machine), it’s so smart and gives me answers!!”
“Did you ask it things you already know?”
“No. But look at the answers!!”
People have no idea how much they’re damaging their brains.
I think people usually use genAI to cut corners. Rather than learn the skill themselves (and develop the sense of what makes the result good/bad), they just go with the zero-effort option.
In my writing group, there’s one guy who’s very enthusiastic about AI; everybody else hates it.
The one who loves AI is a Libertarian who admits he’s never read any book except Conan the Barbarian (which he thinks is the pinnacle of literature), and whose own writing is kind of bad and extremely political. (He wrote a short story about being fined for not using someone’s correct pronouns because that was, and I quote, “The scariest thing I could imagine.”)
I don’t think it’s at all a coincidence that the one guy who’s enthusiastic about having Grok (of course he uses Grok) ‘edit’ all his writing is, pretty obviously, the worst writer in the group.
the worst writer in the group.
Sounds to me like he’s the worst person in the group.
Do you still have that story? How much was the fine and I NEED to know what the ending was like
Nah, it was only presented during the group meeting. Never got a copy.