Well, there are two different layers of discussion that people mix together. One is the abstract discussion about what it means to be human, the limits of our physical existence, the hubris of technological advancement, the feasibility of the singularity, etc. I have opinions here for sure, but the whole topic is open-ended and multipolar.
The other is the tangible: the datacenter-building, oil-burning, water-wasting, slop-creating, culture-exploiting, propaganda-manufacturing reality. Here there’s barely any ethical wiggle room and you’re either honest or deluding yourself. But the mere existence of generative AI can still drive some interesting, if niche, debates (ownership of information, trust in authority and narrative, the cost of convenience…).
So there are different readings of the original meme depending on where you’re coming from:
- A deconstruction of the relationship between humans and artificial intelligence – funny
- A jab at all techbros selling an AGI singularity – pretty good
- Painting anyone with an interest in LLMs as an idiot – meh
I don’t think it’s contrarian to like some of those readings/discussions but still be disappointed in the usual shouting matches.



As a counter: you only think you know what an apple is. You have had experiences interacting with N instances of objects which share a sufficient set of “apple” characteristics. I have had similar experiences, but not identical ones. You and I merely agree that there are some imprecise bounds of observable traits that make something “apple-ish”.
Imagine someone who has never even heard of an apple. We put them in a class for a year and train them on all possible, quantifiable traits of an apple. We expose them, in isolation, to each of those traits, one at a time.
You can go as far as you like, giving this person a PhD in botanical sciences, just as long as nothing they experience is a combination of traits that would normally be described as an apple.
Now take this person out of the classroom and give them some fruit. Do they know it’s an apple? At what point did they gain the knowledge; could we have pulled them out earlier? What if we only covered Granny Smith green apples? Is their tangential expertise useless in a conversation about Gala apples?
This isn’t even so far-fetched. We have many expert paleontologists and nobody has ever seen a dinosaur. Hell, they generally don’t even have real, organic pieces of the animals. Just rocks in the shape of bones, footprints, and other tangential evidence we can find in the strata. But just from their narrow study, they can make useful contributions to other fields like climatology or evolutionary theory.
An LLM only happens to be trained on text because text is cheap and plentiful; the framework of a neural network could be applied to any data. The human brain takes in about 125 MB/s of sensory data, conscious thought grinds along at about 10 bits/s, and each synapse could store about 4.7 bits of information, for a total memory capacity in the range of ~1 petabyte. That system is certainly several orders of magnitude more powerful than any random LLM we have running in a datacenter, but matching it isn’t out of the realm of possibility.
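As a rough sanity check on those numbers (a sketch only; the ~10^15 synapse count is my own ballpark assumption, not a figure from above):

```python
# Back-of-envelope check, orders of magnitude only.
SYNAPSES = 1e15              # assumed synapse count (~10^14-10^15 is the usual ballpark)
BITS_PER_SYNAPSE = 4.7       # per-synapse storage estimate quoted above
SENSORY_BYTES_PER_S = 125e6  # ~125 MB/s of raw sensory input
CONSCIOUS_BITS_PER_S = 10    # ~10 bits/s of conscious throughput

total_bytes = SYNAPSES * BITS_PER_SYNAPSE / 8
print(f"storage ~= {total_bytes / 1e15:.1f} PB")   # ~0.6 PB, i.e. the "~1 petabyte" range
print(f"sensory vs conscious ratio ~= {SENSORY_BYTES_PER_S * 8 / CONSCIOUS_BITS_PER_S:.0e}")  # ~1e8
```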
We could, with our current tech and enough resources, make something that matches the complexity of the human brain. You just need a shit ton of processing power and lots of well-groomed data. With even more dedication we might match the dynamic behavior, mirroring the growth and development of the brain (though that’s much harder). Would it be as efficient and robust as a normal brain? Probably not. But it could be indistinguishable in function: just as fallible as any human working from the same sensory input.
At a higher complexity it ceases being a toy Chinese Room and turns into a Philosophical Zombie. But if it can replicate the reactions of a human… does intentionality, personhood, or “having a mind” matter? Is it any less useful than, say, an average employee who might fuck up an email, occasionally fail to grasp a problem, or sometimes be confidently incorrect?