This person is right. But I think the methods we use to train them are what’s fundamentally wrong. Brute-force learning? Randomised datasets pushed past the coherence/comprehension threshold? And the rationale is that this is done for the sake of optimisation and in the name of efficiency? I can see that overfitting is a problem, but did anyone look hard enough at this problem? Or did someone just jump a fence at the time, and everyone decided to follow along and roll with it because it “worked”, and it somehow became the gold standard that nobody can question at this point?
The researchers in academic machine learning who came up with LLMs are certainly aware of their limitations and are exploring other possibilities. Unfortunately, what happened in industry is that people noticed one particular approach was good enough to look impressive, and everyone jumped on that bandwagon.
If you think of LLMs as an extra teammate, there’s no fun in managing them either. Nurturing the personal growth of an LLM is an obvious waste of time. Micromanaging them, watching to preempt slop and derailment, is frustrating and rage-inducing.
Finetuning LLMs for niche tasks is fun. It’s explorative, creative, cumulative, and scratches a ‘must optimize’ part of my brain. It feels like you’re actually building and personalizing something, and it teaches you how they work and where they fail, like making any good program or tool. It feels like you’re part of a niche ‘old internet’ hacking community, not in the maw of Big Tech.
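For anyone curious what that actually looks like, here’s a minimal sketch of a LoRA finetune using the Hugging Face transformers + peft libraries. The base model name, data file, and hyperparameters are placeholder assumptions, not a recipe:

```python
# Minimal LoRA finetuning sketch (transformers + peft + datasets).
# Base model, data file, and hyperparameters below are illustrative only.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "Qwen/Qwen2.5-0.5B"  # any small open-weights causal LM
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains small low-rank adapters instead of the full weights,
# so a niche-task finetune fits on a single consumer GPU.
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# Niche-task data: a plain-text file, one training example per line (assumed format).
dataset = load_dataset("text", data_files={"train": "my_niche_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_data = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-out",
        num_train_epochs=1,
        per_device_train_batch_size=4,
        learning_rate=2e-4,
    ),
    train_dataset=train_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the adapter weights, a few MB
```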
Using proprietary LLMs over APIs is indeed soul crushing. IMO this is why devs who have to use LLMs should strive to run finetunable, open weights models where they work, even if they aren’t as good as Claude Code.
But I think most don’t know they exist. Or they had a terrible experience with ollama’s terrible defaults and assume that must be what the open model ecosystem is like.
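For what it’s worth, the worst of those defaults (the small default context window, sampling tuned for chat rather than code) can be overridden per request. A minimal sketch against Ollama’s local REST API; the model name and values are illustrative assumptions:

```python
# Call a locally served open-weights model through Ollama's REST API,
# overriding the defaults that most often bite for coding tasks.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5-coder",  # any locally pulled open model (assumed name)
        "prompt": "Write a docstring for this function:\n\ndef frob(x): ...",
        "stream": False,
        "options": {
            "num_ctx": 16384,     # raise the context window well above the stock default
            "temperature": 0.2,   # lower temperature for more deterministic code output
        },
    },
    timeout=300,
)
print(resp.json()["response"])
```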
I’ve maintained for a while that LLMs don’t make you a more productive programmer, they just let you write bad code faster.
90% of the job isn’t writing code anyway. Once I know what code I wanna write, banging it out is just pure catharsis.
Glad to see there’s other programmers out there who actually take pride in their work.
Your experience isn’t other people’s experience. Just because you can’t get results doesn’t mean the technology is invalid, just your use of it.
“skill issue”, as the youngsters say
I’d rather hone my skills at writing better, more intelligible code than spend that same time learning how to make LLMs output slightly less shit code.
Whenever we don’t actively use and train our skills, they will inevitably atrophy. Something I think about quite often on this topic is Plato’s argument against writing. His view is that writing things down is “a recipe not for memory, but for reminder”, leading to a reduction in one’s capacity for recall and thinking. I don’t disagree with this, but where I differ is that I find it a worthwhile tradeoff when accounting for all the ways that writing increases my mental capacities.
For me, weighing the tradeoff is the most important gauge of whether a given tool is worthwhile or not. And personally, using an LLM for coding is not worth it when considering what I gain vs. what I lose by prioritising that over growing my existing skills and knowledge.
I use AI coding tools, and I often find them quite useful, but I completely agree with this statement:
And if you think of LLMs as an extra teammate, there’s no fun in managing them either. Nurturing the personal growth of an LLM is an obvious waste of time.
At first I found AI coding tools to be like a junior developer, in that they will keep trying to solve the problem and never give up or grow frustrated. However, I can’t teach an LLM. Yes, I can give it guard rails and detailed prompts, but it can’t learn in the same way a teammate can, and it will always require supervision and review of its output. Whereas I can teach a teammate new or different ways to do things, and over time their skills and knowledge will grow, as will my trust in them.
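To make that concrete, here’s a minimal sketch of what “guard rails and detailed prompts” amounts to in practice: a system prompt that has to be re-sent with every single request, because nothing the model “learns” persists between calls. The client library, model name, and rules are assumptions for illustration:

```python
# Guard rails live only in the prompt; they must be restated on every call.
from openai import OpenAI

client = OpenAI()

GUARD_RAILS = (
    "You are a code reviewer. Follow these rules:\n"
    "- never invent APIs that are not in the provided context\n"
    "- prefer small, reviewable diffs\n"
    "- say 'I don't know' instead of guessing\n"
)

def review(diff: str) -> str:
    # Unlike a teammate, the model does not accumulate these rules as
    # learned behaviour; they are re-supplied verbatim each time.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": GUARD_RAILS},
            {"role": "user", "content": f"Review this diff:\n{diff}"},
        ],
    )
    return response.choices[0].message.content
```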


