Inspired by a recent talk from Richard Stallman.
From Slashdot:
Speaking about AI, Stallman warned that “nowadays, people often use the term artificial intelligence for things that aren’t intelligent at all…” He makes a point of calling large language models “generators” because “They generate text and they don’t understand really what that text means.” (And they also make mistakes “without batting a virtual eyelash. So you can’t trust anything that they generate.”) Stallman says “Every time you call them AI, you are endorsing the claim that they are intelligent and they’re not. So let’s refuse to do that.”
Sometimes I think that even though we’re in a “FuckAI” community, we’re still helping the “AI” companies by tacitly agreeing that their LLMs and image generators are in fact “AI” when they’re not. It’s similar to how the people saying “AI will destroy humanity” give LLMs an outsized aura they don’t deserve.
Personally I like the term “generators” and will make an effort to use it, but I’m curious to hear everyone else’s thoughts.


Ah… Something just dawned on me.
Didn’t he … I think I’ll just quote Wackypedia for this:
In 1971 there was nothing that was intelligent at all in the world of computing. (And, as is normal, in 99.44% of humanity. This is a constant. 😉) It’s almost as if the term “Artificial Intelligence” has never meant, you know, actual intelligence. And it goes on:
That’s an awful lot of “not intelligent at all” places he’s worked for or been affiliated with that use the term artificial intelligence…
Yeah, “AI” was always a marketing term to drum up grant money and investor interest.
It was always and only meant to trick people into thinking that it meant “actual intelligence.”