If you think of LLMs as an extra teammate, there’s no fun in managing them either. Nurturing an LLM’s personal growth is an obvious waste of time, and micromanaging it, watching to preempt slop and derailment, is frustrating and rage-inducing.
Finetuning LLMs for niche tasks is fun. It’s explorative, creative, cumulative, and scratches a ‘must optimize’ part of my brain. It feels like you’re actually building and personalizing something, and it teaches you how these models work and where they fail, like making any good program or tool. It feels like you’re part of a niche ‘old internet’ hacking community, not in the maw of Big Tech.
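To make “finetunable” concrete, here’s a minimal sketch of attaching a LoRA adapter to a small open-weights model with Hugging Face transformers + peft; the base model and hyperparameters are illustrative placeholders, not a recommendation:

```python
# Minimal LoRA sketch; model name and hyperparameters are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-0.5B"  # any small open-weights model you can run locally

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=16,                                 # adapter rank: cheap to train, easy to iterate on
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # common default: attention projections only
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically a fraction of a percent of the weights

# From here, train the adapter on your niche dataset with any standard
# causal-LM training loop, then save just the adapter weights:
# model.save_pretrained("my-niche-adapter")
```

The adapter itself is tiny compared to the base model, which is what makes the train-compare-tweak loop feel like tinkering rather than an infrastructure project.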
Using proprietary LLMs over APIs is indeed soul-crushing. IMO this is why devs who have to use LLMs should strive to run finetunable, open-weights models where they work, even if those models aren’t as good as Claude Code.
But I think most devs don’t know these exist, or they had a terrible experience with terrible ollama defaults and assume that must be what the open-model ecosystem is like.
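For what it’s worth, a lot of those bad first experiences seem to come down to ollama’s small default context window (num_ctx) silently truncating long prompts. A minimal sketch of the usual fix is a custom Modelfile; the model name and values here are illustrative:

```
# Modelfile (illustrative values)
FROM qwen2.5:7b
PARAMETER num_ctx 16384     # raise the default context window
PARAMETER temperature 0.7
```

Build and run it with `ollama create mymodel -f Modelfile` and `ollama run mymodel`, and the same model suddenly behaves far more like what the benchmarks promised.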