Interesting, some effort required to keep it on track:
Claude can be very persistent when it really wants to do something. I’ve seen Claude run the contents of a make command when make itself is blocked, or write a Python script to edit a file it’s been told it can’t edit. But hooks at least offer better enforcement than prompting alone.

I’ve definitely seen it be stubborn like that in my tinkering with it, just absolutely locked on to a specific approach like a dog with a bone, even after I’ve already started nudging it to move on and try something else. I assume that’s a result of “recency bias” in its memory, missing the forest for the trees, because I don’t need that solution to work, I need a solution to my original problem, preferably the most elegant and least hacky one.
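For reference, the kind of enforcement being discussed can be sketched as a small hook policy. This is a minimal sketch, not the article’s actual setup; it assumes the documented Claude Code PreToolUse hook interface, where the hook receives the tool call as JSON and blocks it by exiting with code 2, with stderr fed back to the model:

```python
import shlex

def should_block(event):
    """Return a reason string if this Bash tool call should be blocked,
    or None to allow it. `event` mirrors the JSON a PreToolUse hook
    reads from stdin (assumed shape: {"tool_input": {"command": ...}})."""
    command = event.get("tool_input", {}).get("command", "")
    tokens = shlex.split(command)
    # Block direct invocations of make. As the article notes, Claude may
    # still try workarounds (running the recipe's contents by hand), so a
    # policy like this is a guardrail, not a guarantee.
    if tokens and tokens[0] == "make":
        return "make is blocked in this project; ask the user before building."
    return None
```

In the actual hook script you would load the event with `json.load(sys.stdin)`, print the reason to stderr, and `sys.exit(2)` to block; a matcher in the hooks config selects which tool the script runs for.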
Certainly one of the things that indicates to me that LLMs will be best used by someone who already knows what they’re doing, for the foreseeable future. It’s a shame they also drive so much deskilling and discourage learning those skills in the first place. That’s absolutely something that worries me about our future.
Best to make a new session when that happens.
Solid advice. I’ve been using the “restore checkpoint” feature to go back to before it started trying an approach, without totally tossing my context, and it’s certainly easier than arguing with it or ignoring its reminders to try X.


