I’m just so dumbfounded that this isn’t obvious to everyone who has (1) average intelligence and (2) a five-minute explanation of how it works.
You should trust it exactly as much as a Magic 8 Ball. Alternatively, replace every source reference of “according to << favorite packaged LLM >> …” with “according to my 10-year-old nephew, who is playing a game of never-say-you-don’t-know…”.
Which isn’t to say that LLMs can’t be useful. But if you trust any fact-based output from such a text generator that you can’t (or don’t) verify yourself, you seem exactly as dumb and liable as if you said “but… but… the Magic 8 Ball said it would be fine!”.
And if you have to do the research yourself anyway to verify what the LLM spits out, you might as well start with that, forget the AI, and save time.
This is where I land.
If I ran the vast majority of my work through an LLM, it would require more testing and verification than doing the work myself in the first place… so there’s no goddamn point.
And it’s not even just that. There are many reasons why using AI is detrimental, even when it’s supposed to (or does) make things “easier”.
https://fortune.com/2026/03/13/ai-isnt-reducing-workloads-its-straining-employees-time-spent-emailing-doubled-deep-focus-work-fell/
https://knowledge.wharton.upenn.edu/article/the-ai-efficiency-trap-when-productivity-tools-create-perpetual-pressure/
https://dmnews.com/a-if-youve-started-using-ai-to-write-emails-you-used-to-write-yourself-psychology-says-you-havent-saved-time-youve-outsourced-a-micro-decision-that-was-quietly-telling-you-something-about/
OK, so here’s how this works.
Step 1: Apparently you have never worked anywhere near ‘customer service’ in a tech-related way.
Step 2: You are vastly, vastly overestimating the intelligence of the average user/person.
Sorry, but most people are just fucking idiots who act far more competent at anything and everything than they actually are.
That’s it, there are no more steps.
Your baseline for ‘average person’ is actually more like the top ~10–25% of people.
The average adult American reads at a 5th–6th grade level.
That is your actual average: the intelligence of an 11-year-old.
The average adult American is your 10-year-old nephew, just bigger and more cocky.
The “average American reading level” is vastly skewed by ESL speakers.
Someone at work: OMG, I can’t believe I haven’t tried Copilot before, this is so great! Look, I asked it about how to do the thing in the framework and it came back and told me the pattern!
Me: Types the same prompt into Copilot, but replaces the name of the framework with a very clearly made-up word. Gets a similar response telling me confidently how to do it in my made-up framework.
Them: Ah, right. You did say bullshit generator, I get it now.
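If anyone wants to reproduce that experiment themselves, here’s a minimal sketch (assuming the `openai` Python client and an API key; the model name and the fake framework “Quarnix” are placeholders I made up, not anything from the anecdote):

```python
# Sketch: ask a chat model how to do something in a framework that
# does not exist, and see whether it confidently answers anyway.
# Assumes the `openai` package and an OPENAI_API_KEY env var;
# the model name and the fake framework "Quarnix" are placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model will do
    messages=[{
        "role": "user",
        "content": "How do I set up dependency injection "
                   "in the Quarnix framework?",
    }],
)

# If the reply is a confident step-by-step guide rather than
# "I don't know of a framework called Quarnix", the point stands.
print(response.choices[0].message.content)
```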
At least you can ask your nephew for a source. LLMs nowadays obfuscate their plagiarism quite well, unlike the early ones, which on any sufficiently advanced topic would start quoting the few scientific sources in their training data almost verbatim.